Blog Projects, Tips, Tricks, and How-Tos

Adding Disk Space To An Ubuntu Server VM


This morning, I worked on an Ubuntu Server 14.04 guest whose disk space had been increased by the VMware host. LVM was configured on the guest during installation, which simplified the process of rolling the extra space into the original partitioning scheme. Below is a set of instructions that I was able to use on this live server without interruption in service; your results may vary, though. There are GUI tools that you can use for much of this, but I prefer to work in the terminal, as that knowledge tends to be more universal. Being able to work without a GUI is also a requirement on many server systems, as well as in emergency recovery situations.

The Process

I started off by getting a view of the current disk space available on the server. I knew that the host had increased the size of disk /dev/sda from 20GB to approximately 100GB, so I used parted to output the unallocated space from the command line (Listing 1).

Listing 1

$ sudo parted /dev/sda print free
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  256MB   255MB   primary   ext2         boot
        256MB   257MB   1048kB            Free Space
 2      257MB   21.5GB  21.2GB  extended
 5      257MB   21.5GB  21.2GB  logical                lvm
        21.5GB  21.5GB  1049kB            Free Space

I didn’t see that extra 80GB of space anywhere. That’s because I needed to force the Linux kernel to rescan the SCSI drive for changes. This can be done through the sys virtual file system, which gives access to kernel information, as well as giving some configuration and control access. I used sudo with bash -c to handle the echo correctly and not get a permission denied error. Also note that the 2:0:0:0 part of the path may be different on your system.

Listing 2

$ sudo bash -c 'echo "1" > /sys/class/scsi_disk/2\:0\:0\:0/device/rescan'

When I ran the parted command from above again, I saw the 80GB (approximately) of additional disk space.

Listing 3

$ sudo parted /dev/sda print free
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  256MB   255MB   primary   ext2         boot
        256MB   257MB   1048kB            Free Space
 2      257MB   21.5GB  21.2GB  extended
 5      257MB   21.5GB  21.2GB  logical                lvm
        21.5GB  107GB   85.9GB            Free Space

Next, I used the unallocated space to create a partition using parted.

Listing 4

$ sudo parted -- /dev/sda mkpart primary 21.5GiB -1s

The two dashes right after the parted call keep the -1s from causing an invalid argument error; they tell parted not to expect any more options. -1s tells parted to use the rest of the unallocated space for the partition, ending it at the last sector. Otherwise, I’d have to calculate the end of the partition myself.

Once this was done, I went ahead and ran partprobe to sync the kernel’s partition table with the disk’s. This prevented me from having to reboot the server.

Listing 5

$ sudo partprobe /dev/sda

The next thing to do was use pvcreate to create a physical volume which can be added into the logical volume group for the drive later. Just make sure that your partition number (the 3 at the end of sda3) matches the partition you just created.

Listing 6

$ sudo pvcreate /dev/sda3
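
If you want to double-check the new partition's number before running pvcreate, lsblk gives a quick tree view of the disk (a sketch using the example disk from above):

$ lsblk /dev/sda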

In order to add the physical volume to the volume group, I needed to find the volume group’s name.

Listing 7

$ sudo vgdisplay
...snip...
  --- Volume group ---
  VG Name               ustest-vg
...snip...

In my case the volume group was ustest-vg. This will most likely follow the [hostname]-vg naming convention, so change it appropriately for your server.
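
If you'd rather pull just the name without wading through the full vgdisplay output, the vgs command can report it directly (handy for scripting):

$ sudo vgs --noheadings -o vg_name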

I was now ready to extend the logical volume group to include the new physical volume.

Listing 8

$ sudo vgextend ustest-vg /dev/sda3

Next, I had to find the name of the logical volume so I could extend it. Remember that the name of your volume group (ustest-vg) will vary with the hostname. Also notice that you want the LV Path that ends in root, not swap.

Listing 9

$ sudo lvdisplay
...snip...
  --- Logical volume ---
  LV Path                /dev/ustest-vg/root
...snip...

Extending the logical volume has a similar form to extending the volume group. The -l +100%FREE option tells lvextend to grow the logical volume into all of the volume group’s free space.

Listing 10

$ sudo lvextend -l +100%FREE /dev/ustest-vg/root /dev/sda3

Lastly, before I could start using this new space, the file system needed to be re-sized.

Listing 11

$ sudo resize2fs /dev/ustest-vg/root

Running df -h should now show the increased space available in that logical volume. In my case this was an increase from about 20GB to approximately 100GB.

Listing 12

$ df -h
...snip...
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/spwebapps1--vg-root   95G   12G   80G  13% /
...snip...


The process of extending the LVM disk space on a system is involved, but not difficult after you’ve stepped through the process once or twice. LVM also makes this process easier than if you use traditional partitions on your disks.

Have questions, comments, and/or suggestions? Let me know in the comments section below. Also, check out the Resources section for additional reading on this topic.


Resources

  1. The Parted User Manual
  2. How to Increase the size of a Linux LVM by expanding the virtual machine disk
  3. Pluralsight – Resizing Linux drives in VMware | Give me my space part II

Running Clojure on the BeagleBone Black


I’ve found a few articles about using the Clojure programming language on the BeagleBone Black. However, all of them I’ve seen gloss over the installation steps. I got Clojure 1.6.0 running on my rev B BeagleBone Black (aka BBB from here on) and I’ve outlined the steps to do it in this post. If you haven’t done anything with your BBB yet, plug it in to a free USB port and open START.htm when it mounts as a drive.

The Install

Make sure that your BBB has Internet access via Ethernet. Once it’s connected to an Ethernet network (with DHCP) you can find the IP address and use it from there on out. However, first you need to connect to the BBB using ssh and the USB-Ethernet bridge interface.

Listing 1

ssh root@192.168.7.2

Add a new user and set the password so that you won’t have to run as root. Be sure to change jwright to your preferred user name.

Listing 2

useradd jwright -m; passwd jwright

Logout (Ctrl+D) and ssh back in as the new user.

Listing 3

ssh jwright@192.168.7.2

Get the Ethernet IP address (inet addr) if you want.

Listing 4

/sbin/ifconfig eth0

I installed Java 1.7 via the “Installing Oracle’s JDK on BeagleBone Black” instructions here. I’ve summarized and customized the steps below.

Download the Linux ARM v6/v7 Soft Float ABI JDK for the ARM platform. The download requires you to accept a license agreement, so I downloaded it to my laptop and then used scp to copy it over to the BBB. The "7u55" portion of the filename in Listing 5 may vary for you. Your IP address and user name will also vary, of course. I set up a ~/bin directory for my user to put the files in, but you would usually put Java in a location that’s more accessible system-wide, like /usr/bin or /usr/local/bin.

Listing 5

mkdir ~/bin;cd ~/bin;scp jwright@<laptop_ip>:jdk-7u55-linux-arm-vfp-sflt.tar.gz ./

Extract the file you downloaded. I rm the file after extraction because of the limited space available on the BBB. If you wanted to be even more efficient you could actually extract the archive as you’re copying it across the network.

Listing 6

tar xvf jdk-7u55-linux-arm-vfp-sflt.tar.gz;rm jdk-7u55-linux-arm-vfp-sflt.tar.gz
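
As mentioned above, you can combine the copy and the extraction so the compressed archive never has to be stored on the BBB's limited flash. A sketch, run from the BBB, with the laptop's address and file location as placeholders you'd fill in:

ssh jwright@<laptop_ip> 'cat jdk-7u55-linux-arm-vfp-sflt.tar.gz' | tar xvzf -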

Add the JDK to your PATH environment variable and make it permanent in .bashrc. Again, note that the version (55) and user name may vary for you.

Listing 7

cd;echo "export PATH=$PATH:/home/jwright/bin/jdk1.7.0_55/bin" | tee -a .bashrc

Set the JAVA_HOME for your installation, making sure to update the version number and user name.

Listing 8

cd;echo "export JAVA_HOME=/home/jwright/bin/jdk1.7.0_55" | tee -a .bashrc

Source (load) the .bashrc file and double check to make sure that Java is installed correctly.

Listing 9

source ~/.bashrc;java -version

At this point it would be a good idea to reboot your BBB and run the java -version command again to make sure that the changes to your bash profile worked.

Listing 10

su -c /sbin/reboot root

After you log back in and check Java, you can download the current version of Clojure. Please check the Clojure download page (linked in the resources below) for the proper download URL.

Listing 11

cd ~/bin;wget <clojure_download_url>

You can then start the Clojure REPL and you’re at the starting point for using Clojure on the BeagleBone Black.

Listing 12

java -cp ~/bin/clojure-1.6.0.jar clojure.main
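
If everything is working, you'll get a version banner and a user=> prompt; evaluating a simple form is a quick sanity check:

Clojure 1.6.0
user=> (println "Hello from the BBB")
Hello from the BBB
nil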


If you’re new to Clojure I highly recommend that you give it a try. If you come from an object-oriented programming background it takes a shift in thinking, but it’s well worth the investment. Have a look at clojure.org and their documentation section to get started. They also have a helpful community that seems to be continuously growing.


Resources

  1. Clojure on the Beagleboard
  2. Clojure’s Quick Start Guide
  3. BeagleBone Black Quick Start Guide
  4. Install Java on the BeagleBone Black
  5. Clojure Download Page
  6. Clojure Home Page
  7. Instructions by Adafruit on Installing Alternate OSes on the BBB

An Ultrasonic Range Sensor, Linux, Ruby, and an Arduino

1. Intro

Recently, I needed to count open-close machine cycles for a customer. We couldn’t trust the machine readouts, so we needed an external device to count the cycles accurately. This device also needed to be built quickly and easily moved from machine to machine. Those requirements pushed my thoughts toward using an ultrasonic sensor and an Arduino. Although Ruby is not the first language I normally think of for the client-side of something like this, the customer uses Ruby extensively in their enterprise. I knew they’d be able to support a Ruby app without any trouble. Below is a trimmed down version of what my associate Tom and I came up with.

2. Hardware

The device is very simple, consisting of three main components:

  1. Radio Shack Ultrasonic Range Finder Unit
  2. Arduino Uno Rev 3
  3. RadioShack Project Enclosure

Two holes were drilled into one side of the box for the Arduino’s USB port and power connector. However, we ended up powering the Arduino through just the USB connection. Two other holes were drilled in one end of the box for the ultrasonic’s emitter and receiver modules.

Holes Cut in Project Box

Following the how-to in the resources section below, the pins shown in the table below were connected between the ultrasonic sensor and the Arduino with some spare computer connectors. Using something like SchmartBoard jumpers would make the connections even easier.

Ultrasonic Pin    Arduino Pin
SIG               Digital 7

The Arduino and ultrasonic sensor were then mounted into the project box using double sided tape and hot melt glue.

Mounted Arduino and Sensor

Next the box was mounted on a magnetic base and arm, and the unit was ready for software.

Finished Unit with Base on Side

Finished Unit Standing Up On Base

3. Software

The only thing that gave me trouble in Ruby was the read_timeout setting for the serial port. gets and readline wouldn’t work properly without setting this value much higher than I would have expected. With the timeout set too low, gets would return before it had read a full line from the Arduino. This would throw the cycle counting code off when a number like 68 was read as a 6 followed by an 8 in a separate read.

All of the code below is available on GitHub.

require 'serialport'

# Make sure we got the right number of arguments
if ARGV.size < 2
  puts "Usage: ruby client.rb [serial_port] [inches_to_target]"
  puts "Example: ruby client.rb /dev/ttyACM0 40"
  exit
end

# Latch variables so we only trigger once on a close or open
is_open = false
is_closed = false

# Keeps track of the number of open-close cycles
cycle_count = 0

# Parameters used to set up the serial port
port_str  = ARGV[0] # The serial port is grabbed from the command line arguments
baud_rate = 9600
data_bits = 8
stop_bits = 1
parity = SerialPort::NONE
# Set up the serial port with the settings from above
sp = SerialPort.new(port_str, baud_rate, data_bits, stop_bits, parity)

# We have to set the read timeout to a very high value or we may get partial reads
sp.read_timeout = 10000 # Milliseconds; this exact value is an illustrative guess, tune for your setup

# Grab the distance to the target from the command line arguments
inches_to_target = ARGV[1].to_i

# Wait to make sure the serial port is initialized
sleep(2) # Two seconds is an illustrative value

# Loop forever reading from the serial port
while true do
	# Grab the next string from the serial port (may be nil if nothing is read)
	value = sp.gets
	# Check to see if we have a closed condition within a +/- 3 inch range (adjust as needed)
	if not value.nil? and value.to_i > inches_to_target - 3 and value.to_i < inches_to_target + 3
		# Make sure the target wasn't already closed
		if not is_closed
			#puts "Closed"

			# If the target was previously open we want to increment the cycle count
			if is_open
				# Keep track of the cycle count
				cycle_count += 1

				# Let the user know what the current cycle count is
				puts cycle_count
			end

			# Flip the latch bits so that we only enter here once
			is_closed = true
			is_open = false
		end
	else
		# We're outside the range that defines the closed condition
		# Make sure the target wasn't already open
		if not is_open
			#puts "Open"

			# Flip the latch bits so that we only enter here once
			is_open = true
			is_closed = false
		end
	end
end
The require 'serialport' line and the serial port setup lines allow us to establish a connection to the Arduino. You'll need to make sure the serialport gem is installed as shown below in the Usage section. If you're not interested in the logic that determines open versus closed, focus on the sp.gets statement and ignore everything below it except the nil check.

I modified existing code for the Arduino, and the original source file is listed in the resources section under "Support Files". I changed the main loop to send only the range value (in inches) that I was interested in; see the original source file for an example of how to use centimeters. The loop below is a reconstruction, and the DistanceMeasure() call is an assumption based on the sensor's support library.

// Main program loop
void loop()
{
  long rangeInInches; //The distance to the target in inches
  // Get the current signal time (DistanceMeasure() is assumed from the
  // vendor's support library linked in the resources section)
  ultrasonic.DistanceMeasure();
  // Convert the time to inches
  rangeInInches = ultrasonic.microsecondsToInches();
  // Send the number of inches to the target to the client
  Serial.println(rangeInInches);
}

4. Usage

The device is set up by aiming the ultrasonic's emitter and receiver at the object that opens and closes. This could be something like a sliding door, a machine's parting line, or a robot arm that always returns to the same place during a cycle. An important thing to remember is that this sensor plays by different rules than an optical sensor. The ultrasonic sensor will register against clear things like Plexiglas. The best way to get good readings is by shooting against a hard surface that's perpendicular to the line of sight of the ultrasonic sensor. A measurement should be taken of the distance from the ultrasonic sensor to the target object. This will be used when starting the Ruby application.

This application has only been tested with Ruby 2.0.0, but should work fine with 1.9.3. If you have any questions on how to install Ruby on Linux, have a look at the RVM and Ruby parts of Ryan Bigg's blog post here. I would highly discourage you from installing Ruby from most Linux distribution repositories, especially Ubuntu's.

In order to get the Ruby app to run, the serialport gem has to be installed first.

$ gem install serialport

If the program is run without any arguments it will display a usage message.

$ ruby client.rb
Usage: ruby client.rb [serial_port] [inches_to_target]
Example: ruby client.rb /dev/ttyACM0 40

The serial port normally shows up as /dev/ttyACMx on my Ubuntu based laptop, where x is usually a number between 0 and 3. In some cases your Arduino might show up as something like /dev/ttyUSBx. The inches_to_target argument is the distance at which the ultrasonic sensor should see the target (closed condition). If the application sees anything outside of a +/- range around this distance, it counts that as an open condition. When it sees something within this range again (closed), it counts that as a cycle. At the end of each cycle the application outputs a line showing the cycle count. You could easily add code that would display the present time, the last time a cycle was made, and the difference between the two, which would give you the cycle time of a machine.
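
As a sketch of that idea, you can even timestamp cycles without touching the Ruby code by piping the client's output through a small shell loop (the port and distance are the example values from above):

$ ruby client.rb /dev/ttyACM0 40 | while read -r count; do
    now=$(date +%s)
    [ -n "$last" ] && echo "Cycle $count: $((now - last)) seconds since last cycle"
    last=$now
  done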

5. Conclusion

In practice this system has been fairly intuitive and easy to use, although the ultrasonic sensor's reliable range is far less than the vendor's spec of 157 inches. Again, an important thing to remember when trying to get reliable readings is to shoot against a hard surface, and keep the "beam" of the ultrasonic sensor as perpendicular (90 degrees) to the face of the target object as possible.

Still have questions? Have suggestions that will make the hardware or software better? Please let us know in the comments section.

6. Resources

  1. Ultrasonic Range Finder - Radio Shack
  2. Ultrasonic Range Finder User's Guide
  3. Ultrasonic Range Finder How-To - Radio Shack Blog
  4. Ultrasonic Range Finder Support Files
  5. Arduino Uno Rev 3
  6. ruby-serialport Ruby Serial Library
  7. Example of Using Ruby getc to Read From Arduino
  8. Using Ruby with the Arduino - Arduino Forums

Collaborative Design with Open Design Engine

1. Intro

Pulling together hardware designers from all over the Internet presents several challenges. Among these is the task of keeping things like project time lines, roadmaps, forums, wikis, and issue tracking organized and cohesive. There may also be the need to collect and organize design files such as CAD, which don’t fit well into the traditional paradigm used by source code management systems like git and CVS. The main reason for this is that CAD files tend to be binary, and are hard to diff or merge in any meaningful way.

Addressing these challenges and many more is what the Redmine based Open Design Engine project (Figure 1.1) is seeking to do. Open Design Engine, or ODE for short, is an open source web application that’s an initiative of Mach 30, an organization committed to fostering a space-faring future for the human race through safe, routine, reliable, and sustainable access to space. Mach 30’s mission is a lofty one but the organization has laid a solid foundation, which includes the implementation of ODE, on which to move toward this future.

Figure 1.1 – Open Design Engine (ODE) Banner

Mach 30 likes to “think of Open Design Engine as the Source Forge for hardware”1, and I would say that ODE is well positioned to fill that role. You can download ODE and self-host it if you like, or you can simply use the version hosted by Mach 30.

I thought I would put together a quick start guide that covers a few of ODE’s basic features. If there’s anything that I miss please feel free to ask questions in the comments section, or have a look at the ODE forums.

2. Getting Started

The first thing that you’ll need to do is register. The Open Design Engine registration process is very similar to Redmine’s (Figure 2.1) except that ODE requires you to accept its Terms of Service (ToS) before completing the registration.

Figure 2.1 – Redmine’s Registration Form

Whenever you need to sign in to ODE again after registration, look for the Sign in link in the upper right hand corner of any ODE page. Once you’re signed in, the Sign in link will turn to Sign out and you’ll see the addition of your user name and a My account link (Figure 2.2). With registration completed you can start experimenting with some ODE features and evaluating any existing projects you think you’d like to be involved in. We’ll take a look at the Open Design Engine project entry (Figure 2.2) to help you get familiar with the layout.

Figure 2.2 – The ODE Project Tabs

The first thing to notice is that there’s a Help link toward the upper left hand corner of every ODE page. This takes you directly to the Redmine guide which would be one good place to start if you’re having trouble with an aspect of ODE’s functionality. The next link over is Projects, which will take you to a list of all the projects currently hosted on ODE. The second link from the left is My page, which leads to a customizable page that summarizes your involvement in any ODE projects that list you as a developer or manager. To customize this page you click on the Personalize this page link (Figure 2.3).

Figure 2.3 – The “Personalize this page” Link on “My page”

3. Modules

The tabs in Figure 2.2 represent Redmine modules chosen by the project manager (Figure 3.1), and will vary from project to project. Several of the tabs/modules are self-explanatory, but there are others that warrant a closer look. For instance, the Issues and New issue tabs are interesting because “Issues” are the mechanism by which you can add items to a project’s time line. Without the issue tracking system, items will never show up on the Gantt chart and calendar, even if those modules are enabled.

Figure 3.1 – Modules Available in ODE

Adding a new issue is pretty straight-forward. Using the New issue tab (Figure 3.2), you first need to set the issue type via the Tracker pull-down. Your choices are Bug, Feature, and Support, which are a bug report, a feature request, and a request for support respectively. You can also set the Priority, who the issue is assigned to (Assignee), and which Target version from the project’s roadmap the issue applies to. The Start date, Due date, and Estimated time fields are where you start setting what shows up in the calendar and Gantt chart. Even if you’re not a member of a project’s team, you can still file an issue as long as you’re signed in to ODE.

Figure 3.2 – The “New issue” Tab

The Activity and News tabs are similar in that they’re used to keep track of what’s happening in a project. The items on the Activity tab are made up of things like wiki edits, messages, forum postings, and file uploads, and you can filter these updates using the check boxes in the right hand column (Figure 3.3).

Figure 3.3 – Activity Filter Check Boxes

The News module acts like a blog for the project, and the news posts will show up under Latest news if you added that block when customizing My page. The Wiki and Forums tabs are what you would expect, and I thought they were fairly intuitive to use. In my experience these are where the bulk of the planning and design collaboration happen.

The last thing I’m going to cover is ODE’s ability to handle files. There are three file handling modules that I feel are the most useful, and those are Files, Repository, and DMSF, although DMSF is technically a plug-in. The Files module is useful for distributing project packages. For instance, the Files tab for the Open Design Engine project is where you would look to download ODE so that you could host it yourself. The Repository module in ODE allows you to use a Subversion (and eventually a git) repository with your project. If you have any source code to manage in addition to the rest of your project’s files, this is an invaluable tool. That brings us to DMSF (Figure 3.4).

Figure 3.4 – The “DMSF” Tab

The DMSF documentation states that it aims to replace Redmine’s current Documents module, which is used for technical and user documentation. In my view the DMSF plug-in, especially when coupled with the Wiki module, does make the Documents module feel unneeded. Some of the features of DMSF include document versioning and locking, allowing multiple downloads at once via a zip file, and optional full-text searching through text-based documents. There are several other features, and they can be found in the DMSF documentation.

The operation of this plug-in is pretty straight-forward, but there are a few buttons that we’ll take a quick look at (Figure 3.5).

Figure 3.5 – The “DMSF” Module Buttons

Buttons 1 and 2 operate on the directory level, while buttons 3 through 6 are for the sub-directory and file levels. Button 1 allows you to edit the meta data (name, description, etc) for the directory level that you’re currently at. If you were to click the button at the level shown in Figure 3.4, you would end up editing the meta data for the Documents directory. Button 2 creates a new directory at the current level. If you clicked it in Figure 3.4 you would create a new folder directly under Documents. Button 3 gives you the ability to edit the meta data for a file in the same way that button 1 does for parent directories. Button 4 is only visible when you’re working with files and allows you to lock a file so that it can’t be changed by other project members. Be careful when you use this feature so that you don’t needlessly keep other contributors from getting the access that they need to push the project forward. Button 5 (the red ‘X’) deletes a file, and button 6 allows you to set whether or not modifications to a file show up in the project’s Activity tab.

Uploading a file using DMSF requires only a few button clicks. You first need to click the Add Files button, which will bring up a standard open file dialog. Once you’ve selected the file, you can click the Start Upload button, which will bring up the form shown in Figure 3.6.

Figure 3.6 – The “DMSF” Upload Meta Data Setting

This form allows you to set the meta data for the file you’re uploading. If the file already exists in the current directory, ODE will automatically increment the minor version number for you. You can then set other attributes like whether the file has been approved or is awaiting approval, and even attach a comment that will stay with the file. Once these things are set you can click Commit to upload the file.

There’s much more that can be done when handling files in ODE, but that will hopefully give you a good start.

4. Conclusion

This post is essentially an ODE crash course since it would be much too large if I tried to cover everything that ODE has to offer. The best way to learn the other features that are available is to set up an account and start looking around. There’s a Sandbox project where you can start the learning process in a safe environment where your actions won’t hinder other users.

Once you get comfortable with contributing to projects on ODE you’ll be in a great position to start a project of your own. If you have a project (hardware, software, mechanical, medical, whatever) that you’ve always wanted to work on but haven’t, why not start it on ODE where you have the chance of getting input from designers all over the world?

Special thanks go to J. Simmons, president of the Mach 30 foundation, for his help in writing this post. The efforts of J and the rest of the Mach 30 board have brought Open Design Engine to life and made it available to all of us, and I look forward to seeing the planned improvements implemented over time. Open Design Engine has the potential to become an invaluable tool for hardware developers and designers in general, and I’ve enjoyed my time working with it.

Please feel free to leave any comments or questions below, and have a look at the Innovations Technology Solutions website for other projects, tips, how-tos, and service offerings. Thanks for reading.


Resources

  1. Open Design Engine
  2. Open Design Engine Forums
  3. ODE’s Sandbox Project
  4. Mach 30 Website
  5. Redmine Website
  6. Redmine User’s Guide
  7. DMSF Documentation

Make Self-Extracting Archives with makeself.sh


When making your custom scripts or software available to someone else, it’s a good idea to make that content as easy to extract and install as possible. You could just create a compressed archive, but then the end user has to manually extract the archive and decide where to place the files. Another option is creating packages (.deb, .rpm, etc) for the user to install, but then you’re more locked into a specific distribution. A solution that I like to use is to create a self-extracting archive file with the makeself.sh script. This type of archive can be treated as a shell script and will extract itself, running a scripted set of installation tasks when it’s executed. The reason this works is that the archive is essentially a binary payload with a script stub at the beginning. This stub handles the archive verification and extraction process and then runs any predefined commands via a script specified at the time the archive is created. This model offers you a lot of flexibility, and can be used not only for installing scripts and software but also for things like documentation.
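
To make that model concrete, here is a minimal hand-rolled sketch of the same stub-plus-payload idea (illustrative only; makeself.sh's real stub adds integrity checks, option parsing, and cleanup, and the setup.sh name is just an example):

#!/bin/sh
# Stub: find the line after the marker, extract the payload, run the installer
ARCHIVE_LINE=$(awk '/^__ARCHIVE_BELOW__/ { print NR + 1; exit 0 }' "$0")
tail -n +"$ARCHIVE_LINE" "$0" | tar xz
./setup.sh
exit 0
__ARCHIVE_BELOW__
(compressed tar payload appended here)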





The makeself.sh script is itself packaged as a self-extracting archive when you download it. You can extract the script and its support files by running the installer with a Bourne compatible shell (Listing 1).

Listing 1

$ sh makeself-2.1.5.run
Creating directory makeself-2.1.5
Verifying archive integrity... All good.
Uncompressing Makeself 2.1.5........
Makeself has extracted itself.
$ ls makeself*
makeself-2.1.5:
COPYING  makeself-header.sh  makeself.1  makeself.lsm  makeself.sh  README  TODO

You can see from the output that I’m working with version 2.1.5 of makeself.sh for this post. To make things easier, you can install makeself.sh in your ~/bin directory, and then make sure $HOME/bin is in your PATH environment variable. You need to ensure that makeself.sh and makeself-header.sh are in the directory together unless you’re going to specify the location of makeself-header.sh with the --header option (Listing 3).

General Usage

Listing 2 shows the usage syntax for makeself.sh.

Listing 2

makeself.sh [OPTIONS] archive_dir file_name label startup_script [SCRIPT_ARGS]

After the OPTIONS, you need to supply the path and name of the directory that you want to include in the archive. The next argument is the file name of the self-extracting archive that will be created. You can choose any name you want, but for consistency and clarity it’s recommended that the file have a .run or .sh file name extension. Next, you can specify a label that will act as a short description of the archive and will be displayed during extraction. The final argument to makeself.sh is the name of the script that you want to have run after extraction is complete. In turn, this script can have arguments passed to it that are represented by [SCRIPT_ARGS] in Listing 2. It’s important not to get the arguments to the startup script confused with the arguments to makeself.sh itself.

Listing 3 shows some of the options for use with makeself.sh. You can find a comprehensive list on the makeself webpage, but in my own experience I’m usually only concerned with the options listed here.

Listing 3

--gzip : Use gzip for compression (default setting)
--bzip2 : Use bzip2 for better compression. Use the '.bz2.run' file name extension to avoid confusion on the compression type.
--header : By default it's assumed that the "makeself-header.sh" header script is stored in the same location as makeself.sh. This option can be used to specify a different location if it's stored somewhere else.
--nocomp : Do not use any compression, which results in an uncompressed TAR file.
--nomd5 : Disable the creation of an MD5 checksum for the archive, which speeds up the extraction process if you don't need integrity checking.
--nocrc : Same as --nomd5 but disables the CRC checksum instead.

In addition to the options passed to makeself.sh when creating the archive, there are options that you can pass to the archive itself to influence what happens during and after the extraction process. Listing 4 shows some of these options, but again please have a look at the makeself webpage for a full list.

Listing 4

--keep : Do not automatically delete any files that were extracted to a temporary directory.
--target DIR : Set the directory (DIR) to extract the archive to.
--info : Print general information about the archive without extracting it.
--list : List the files in the archive.
--check : Check the archive for integrity.
--noexec : Do not run the embedded script after extraction.
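
Combining a few of these, an end user could unpack the payload for inspection without running the embedded startup script (the archive name here is the one from the example that follows):

$ sh myprogram.bz2.run --noexec --target /tmp/myprogram-files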


Let’s go through a practical example using some of the information above. If you had a directory named myprogram within your home directory and you wanted to package it, you could create the archive with the command line at the top of Listing 5.

Listing 5

$ makeself.sh --bzip2 myprogram/ myprogram.bz2.run "The Installer For myprogram" ./setup.sh
Header is 402 lines long
About to compress 20 KB of data...
Adding files to archive named "myprogram.bz2.run"...
./
./myprogram.c
./setup.sh
./myprogram
CRC: 955035546
MD5: 7b74c31f31589ee236dea535cbc11fe4
Self-extractible archive "myprogram.bz2.run" successfully created.

Notice that I used bzip2 compression via the --bzip2 option rather than using the default of gzip. I couple this with setting the file name extension to .bz2.run so that the end user will have a way of knowing that I used bzip2 compression. After the compression option, I pass an argument requesting that the myprogram directory, which contains a simple C program also called myprogram, be added to the archive. After the file name specification (with the .bz2.run extension), we come to the description label for the archive. This can be a string of your choosing and will be displayed with the output from the extraction process. The last argument is the startup script (setup.sh) that will be run when the archive is extracted. Listing 6 shows the contents of this simple script, which installs the myprogram binary in the user’s bin directory, but only if they have one.

Listing 6

#!/bin/sh
#Install to ~/bin if it exists
if [ -d $HOME/bin ]
then
	cp myprogram $HOME/bin/
fi

Notice that when specifying the startup script, I used the path ./setup.sh, which points to the current directory. This is a reference to the directory after the extraction, not the directory where the script resides when you’re creating the archive. Your startup script should be inside the directory that you’re adding to the archive. One other thing to note about the startup script is that you will need to set its execute bit before creating the archive. Otherwise you’ll get a Permission denied error when the makeself-header.sh script stub tries to execute it.
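
Setting that execute bit is a one-liner done before you build the archive (using the example names from above):

$ chmod +x myprogram/setup.sh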

Now we transition to the end user viewpoint, where the self-extracting archive has been downloaded and we’re getting ready to run it. You can set the execute bit of the archive and run it directly, or execute it with a Bourne compatible shell the way the makeself.sh installer was: sh myprogram.bz2.run. Before we extract the archive though, let’s verify its integrity and have a look at the contents (Listing 7).

Listing 7

$ sh myprogram.bz2.run --check
Verifying archive integrity... MD5 checksums are OK. All good.
$
$ sh myprogram.bz2.run --list
Target directory: myprogram
drwxr-xr-x jwright/jwright    0 2011-12-20 13:49 ./
-rw-r--r-- jwright/jwright   66 2011-12-20 11:45 ./myprogram.c
-rw-r--r-- jwright/jwright   99 2011-12-20 11:49 ./setup.sh
-rwxr-xr-x jwright/jwright 7135 2011-12-20 11:45 ./myprogram

We can see from the first command that the archive is intact and that there are no errors. The second command shows us that the archive contains 3 files. The first is the source file myprogram.c, which I left in the archive directory so that I could have the option of giving the user the source code. The next file is the startup script that will be run after extraction. The last file, of course, is the binary that our end user wants to install. Let’s go ahead and install myprogram by using the execute bit on the archive (Listing 8).

Listing 8

$ chmod u+x myprogram.bz2.run
$ ./myprogram.bz2.run
Verifying archive integrity... All good.
Uncompressing The Installer For myprogram....

Now to test that the installation worked, we can try to run myprogram (Listing 9).

Listing 9

$ myprogram
Hello world!

I can see that the program is present and did exactly what I expected it to do. Keep in mind that if ~/bin is not in your PATH variable you’ll have to supply the full path to the myprogram binary.


This has been a quick overview of what makeself.sh can do. I’ve found it to be a very useful script that is also very dependable and easy to use. Through the use of the startup script, along with the full complement of options, makeself.sh offers you a lot of flexibility when creating installers. You can create this type of self-extracting archive manually, but makeself.sh makes it much easier and adds great features like checksum validation.

Please feel free to leave any comments or questions below, and have a look at the Innovations Technology Solutions website for other projects, tips, how-tos, and service offerings. Thanks for reading.


Resources

  1. makeself Homepage
  2. makeself GitHub Page
  3. Linux Journal Article on How to Make Self-Extracting Archives Manually

Bodhi Linux On a Touchscreen Device


Welcome, in this blog post we’re going to set Bodhi Linux up on a touchscreen device. Since the last post covered touchscreen calibration, I thought I would go one step beyond that by choosing and configuring a distribution to make the touchscreen easy to use (on-screen keyboard, finger scrolling, etc). This post won’t be an exhaustive run-through of everything that you can do with Bodhi on a touchscreen system, but my hope is to give you a good start. Please feel free to talk about your own customizations and ways of doing things in the comments section. We’ll be focusing on desktop touchscreens and Intel based tablets here, but Bodhi also has an ARM version that’s currently in alpha. The ARM version of Bodhi will officially support Archos Gen 8 tablets initially, and then expand support out from there. I’m using Bodhi because it has a nice Enlightenment Tablet profile that I think makes using a touchscreen system fairly natural and intuitive. You could of course also use another distro like Ubuntu (Unity) or Fedora (Gnome Shell) with your touchscreen but, as I mentioned, I’m partial to Bodhi for this use.




The Software

For this post I installed Bodhi 1.2.0 (i386) and used xinput-calibrator as the touchscreen calibration utility. I wrote a Tech Tip on xinput-calibrator last month that you can find here. If your touchscreen doesn’t work correctly out of the box, I would suggest following the instructions in that blog post before moving on. If you’re new to Bodhi Linux, you might want to have a look at their wiki. I’ve also found Lead Bodhi Developer Jeff Hoogland’s blog to be very informative, especially when I was setting Bodhi up for this post. Jeff and the other users on the Bodhi forum are very nice and helpful if you want to ask questions too.

The Hardware

My test machine was an Intel based Lenovo T60 laptop with an attached Elo Touchsystems 1515L 15″ Desktop Touchmonitor. Even if you’re working with Bodhi Linux on an ARM device though, you’ll still be able to take a lot of tips away from this post.


I put a standard installation of Bodhi on the Lenovo T60 by simply following the on-screen instructions. Once I had it installed, I booted the system and ended up at the initial Profile selection screen.

The Bodhi Linux Profile Selection Screen

Since Bodhi uses Enlightenment for its desktop manager, this profile selection gives you an easy way to customize the Enlightenment UI for the way you’ll use it. In this case we’ll be interacting with Bodhi via a touchscreen, so we want to choose the Tablet profile. The next screen is theme selection, and for our purposes it doesn’t matter which theme you choose.

Once you’ve chosen a theme you should be presented with the Bodhi tablet desktop. The first thing that I notice on my machine is that the Y-axis of the touchscreen is inverted. When I touch the bottom of the screen the cursor jumps to the top, and vice versa. In order to fix that we need to get the machine on a network so that we can download and install the screen calibration utility. Bodhi’s network manager applet is easy to find on the right hand side of the taskbar. After clicking on that and setting up my local wireless network, I’m ready to download and install my preferred screen calibration utility – xinput-calibrator. As I mentioned, I wrote a blog post about xinput-calibrator last month.


Now we can start on the customizations that will make our touchscreen system easier to use. The first thing that I did was install Firefox. If you’re running on a lower power device you might want to stick with Midori, which is Bodhi’s default browser. If you use Firefox, there’s a nice add-on called Grab and Drag that allows you to do drag and momentum scrolling. As you’ll see the first time you run it, Grab and Drag has quite a few settings and I think it’s worth the time to look through them. One other thing that I like to do with Firefox on a touchscreen device is hide the menu bar, but that’s just my personal preference.

If you’re going to run Midori, you’re not out of luck on touch and drag scrolling. You can add the environment variable declaration export MIDORI_TOUCHSCREEN=1 somewhere like ~/.profile to enable touch scrolling. The drawback is that touch scrolling in Midori is not all that easy to use because it doesn’t distinguish between a touch to scroll, and a touch to drag an image or select text. I’ve also found that setting the MIDORI_TOUCHSCREEN variable on Bodhi 1.2.0 can be a little finicky, so if all else fails you can prepend MIDORI_TOUCHSCREEN=1 to the command in the Exec line of Midori’s .desktop file. In version 1.2.0, a search for midori.desktop finds this file.
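
For example, enabling the variable for your user, with the .desktop file trick as a fallback, might look like this sketch (the exact Exec line and file location will vary, so locate midori.desktop first as described above):

# Enable touch scrolling for your user
echo 'export MIDORI_TOUCHSCREEN=1' >> ~/.profile

# Fallback: prepend the variable to the command in midori.desktop's Exec line, e.g.
# Exec=env MIDORI_TOUCHSCREEN=1 midori %U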

Xournal is an application that allows you to write notes and sketch directly on the touchscreen. If you want to take notes on your touchscreen device, this is an application that you’ll want to check out. If you want to see Xournal in action, you can watch the videos below that have sections showing Jeff Hoogland using Xournal and Bodhi’s Tablet profile. In the videos you’ll see that Jeff uses his finger which worked okay for me, but to get nicer looking notes on the 1515L I had to switch to a stylus. If you want to install Xournal, just look for references to the xournal package in your package manager or download the latest version from the Xournal website.

Another customization that I make is to set the file manager up to respond to single clicks. Bodhi 1.2.0 uses PCManFM 0.9.9 as its default file manager, so to do this open it and click Edit -> Preferences in the menu. On the General tab make sure that the Open files with single click box is checked. Alternatively, you can use the less complete but more touch friendly EFM (Enlightenment File Manager). To use EFM, you’ll need to load the EFM (Starter) module under Modules -> Files. Once you’ve loaded the module, you can launch it by touching the Bodhi menu on the left hand side of the taskbar and then Files -> Home. The first time you use EFM you’ll need to add the navigation controls by right clicking on the toolbar, clicking toolbar -> Set Toolbar Contents, and then clicking on EFM Navigation followed by a click of the Add Gadget button. Please keep in mind that EFM is a work in progress, so it’s not feature-complete.

The Enlightenment File Manager (EFM)

I’ve got PDF copies of two of the Linux magazines I normally read, so another addition I make is to install Acrobat Reader or an open source PDF reader. It’s best if you choose a reader with drag to scroll capability like Adobe Reader. If you do use Adobe Reader, make sure that you have the Hand tool selected and use a continuous page view for the easiest scrolling.

If you’re going to view images on your touchscreen system, you may want to install Ephoto which is a simple image viewer for Enlightenment. On a Bodhi/Ubuntu/Debian based system a search for the ephoto package should find what you need to install.

The Ephoto Image Viewer For Enlightenment

General Usage

Below are a few tips for when you’re using your newly set up touchscreen system. So that you can see what’s possible when running Bodhi’s Tablet profile, I’ve included the two embedded videos below from Jeff Hoogland.

  • There is an applications menu button on the right side of the quick launch bar (bottom of the screen). Clicking this button will bring up a set of Applications along with Enlightenment Widgets, and Bodhi 1.2.0 seems to have a placeholder for a Config subset. There is also a more traditional applications menu button on the left end of the taskbar.
  • You can touch and hold down on an icon (launcher) in the applications menu until it lets you drag it. You can then drag the launcher to the desktop or the quick launch bar.
  • If you touch and hold the desktop, its icons and the icons in the quick launch bar will start to swing and will have red X’s beside them. If you click on one of the red X’s you’ll remove that launcher. Click on the big red X in the lower right-hand corner of the screen to exit this mode.
  • To change to another workspace, simply drag your finger from right to left across the screen. There is a set of dots just above the quick launch bar that shows you which workspace you’re in. Each of the workspace desktops can be customized with their own set of icons, but the taskbar and quick launch bar stay the same.
  • You can touch the Scale windows button on the left of the task bar to get a composited window list. Once you have this list, you can close windows simply by touching and dragging them off the screen.

The Scale Windows Button On The Tablet Profile Taskbar

Bodhi Linux Tablet Usage Videos

Jeff Hoogland Showing Bodhi Linux On A Dell Duo

Jeff Hoogland Demonstrating Bodhi Linux On An ARM Device

Possible Issues

Below is a list of things that might cause you some trouble and/or confusion.

  • In my experience when the GUI asked for an administrator password, I couldn’t enter it because the dialog was modal and didn’t allow me to get to the on-screen keyboard button. A good example of this happens when I try to launch the Synaptic Package Manager.
  • If you have trouble closing a window with the Bodhi close button (far right side of the taskbar), try touching the window first to make sure it’s in focus.
  • The on-screen keyboard is not context sensitive and does not do auto-completion. I wasn’t personally bothered by this, but some avid users of other tablet and smartphone platforms might be.
  • Support for screen rotation (from portrait to landscape) will be hit and miss, and depends almost exclusively on community support. Unfortunately, many devices have closed specs so reverse engineering becomes the only solution.


That concludes this quick Project. Please feel free to leave any comments or questions below. Before signing off, I’d like to thank Jeff Hoogland for being so helpful in answering my questions while I was writing this post. A great community has gathered around Bodhi, and I’m looking forward to seeing where Jeff and his team take the distro in the future. If you haven’t tried Bodhi yet, I highly encourage you to head over to their website and have a look. Also, have a look at the Innovations Technology Solutions website for other projects, tips, how-tos, and service offerings. Thanks for reading.


Resources

  1. Bodhi Linux for ARM Alpha 1 – Jeff Hoogland
  2. ARM Section of Bodhi Linux Forum
  3. Bodhi Linux Forum – ARM Version of Bodhi Discussion
  4. HOWTO: Linux on the Dell Inspiron Duo
  5. Bodhi Linux Website
  6. Lead Bodhi Developer Jeff Hoogland’s Blog
  7. xinput-calibrator Page

Tech Tip – Touchscreen Calibration In Linux


Welcome, this is an Innovations Tech Tip. I recently did some work with an ELO Touchsystems 1515L 15″ LCD Desktop Touchmonitor. I was pleased with the touchmonitor’s hardware and performance, but in order to make it work properly in Linux I had to find a suitable calibration program. Out of the box on several distributions this touchscreen exhibits Y-axis inversion, where touching the top of the screen moves the cursor to the bottom and vice versa. xinput-calibrator is a project that worked well for calibration, fixing the Y-axis inversion issue, and as a bonus it works for any standard Xorg touchscreen driver.




The Software

For this post I tested on Bodhi Linux 1.2.0 (based on Ubuntu 10.04 LTS), Fedora 15, and Ubuntu 11.04. xinput-calibrator, as I mentioned, was the screen calibration utility.

The Hardware

My test machine was an Intel based Lenovo T60 laptop with an attached ELO Touchsystems 1515L 15″ LCD Desktop Touchmonitor.


Click here to go to the xinput-calibrator website and choose your package. Be aware that if you’re using the ARM version of Bodhi (in alpha at the time of this writing), it’s based on Debian, so you’ll want to grab the Debian testing package. You can also add a PPA if you’re running Ubuntu, but I had trouble getting that to work during my tests. Last but not least, you can grab the source and compile it yourself by downloading the tarball or using git.

Before you actually install xinput-calibrator on a freshly installed Debian based system (including Ubuntu and Bodhi), make sure to update your package management system or you’ll get failed dependencies. This is because the package management system doesn’t know what packages are available in the repositories yet. This isn’t a problem with Fedora since the package management index is updated every time you use YUM. Once you’ve ensured that the system is or will be updated, you’ll be ready to install xinput-calibrator via the package that you downloaded or the PPA.


Once xinput-calibrator is installed, it should show up in your application menu(s). Look for an item labeled “Calibrate Touchscreen”. If you don’t see it anywhere, you can launch it from the terminal with the xinput_calibrator (note the underscore) command.

Figure 1 – xinput_calibrator screenshot

Using It

The use of xinput-calibrator is very simple. You’re presented with a full-screen application that asks you to touch a series of 4 points. The instructions say that you can use a stylus to increase precision, but I find that using my finger works well for the ELO touchscreen. One of the nice features of xinput-calibrator is that it’s smart enough to know when it encounters an inverted axis. After I run through the calibration the Y-axis inversion problem is fixed, so I’m ready to start using the touchscreen.

Persistent Calibration

You’ll probably want your calibration to persist across reboots, so you’ll need to do a little more work now to make the settings permanent. First you’ll need to run the xinput_calibrator command from the terminal and then perform the calibration.

Listing 1

$ xinput_calibrator
Calibrating EVDEV driver for "EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch® USB Touchmonitor Interface" id=9
	current calibration values (from XInput): min_x=527, max_x=3579 and min_y=3478, max_y=603
Doing dynamic recalibration:
	Setting new calibration data: 527, 3577, 3465, 600

	--> Making the calibration permanent <--
  copy the snippet below into '/etc/X11/xorg.conf.d/99-calibration.conf'
Section "InputClass"
	Identifier	"calibration"
	MatchProduct	"EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch® USB Touchmonitor Interface"
	Option	"Calibration"	"527 3577 3465 600"
EndSection

Toward the bottom of the output you can see instructions for "Making the calibration permanent". This section will vary depending on what xinput_calibrator detects about your system. In my case under Ubuntu the output was an xorg.conf.d snippet, which I then copied into the xorg.conf.d directory on my distribution. Be aware that even though the output says that xorg.conf.d should be located in /etc/X11, it might actually be located somewhere else, like /usr/share/X11, on your distribution. Once you've found the xorg.conf.d directory you can use your favorite text editor (with root privileges) to create the 99-calibration.conf file inside of it. Now when you reboot, you should see that your calibration has stayed in effect.
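
If you're not sure where xorg.conf.d lives on your distribution, a quick search of the two usual parent directories will find it (a sketch):

$ find /etc/X11 /usr/share/X11 -type d -name xorg.conf.d 2>/dev/null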

If you have a reason to avoid using an xorg.conf.d file to store your calibrations, you can run xinput_calibrator with the --output-type xinput option/argument combo.

Listing 2

$ xinput_calibrator --output-type xinput
Calibrating EVDEV driver for "EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch® USB Touchmonitor Interface" id=9
	current calibration values (from XInput): min_x=184, max_x=3932 and min_y=184, max_y=3932
Doing dynamic recalibration:
	Setting new calibration data: 524, 3581, 3482, 591

	--> Making the calibration permanent <--
  Install the 'xinput' tool and copy the command(s) below in a script that starts with your X session
    xinput set-int-prop "EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch® USB Touchmonitor Interface" "Evdev Axis Calibration" 32 524 3581 3482 591

At the bottom of this output you can see that there are instructions for using xinput to make your calibration persistent. If it's not already present, you'll need to install xinput and then copy the command line in xinput_calibrator's instructions into a script that starts with your X session. You can usually also add it to your desktop manager's startup programs via something like gnome-session-properties if you would prefer.

Another option that might be of use to you is -v. The -v (--verbose) option displays extra output so that you can see more of what's going on behind the scenes. If you have any trouble getting your calibration to work, this would be a good place to start.

Your output will probably vary from what I have here depending on what type of hardware you have and which distribution you run. For instance, on Fedora 15 I get the xinput instructions by default instead of an xorg.conf.d snippet. Make sure that you run the above commands yourself, and don't copy the output from my listings.

If you have a desire or need to redo the calibration periodically, you might want to consider creating a wrapper script to automate the process of making the calibration permanent. Such a script might use sed to strip out the relevant code and then a simple echo statement to dump it into the correct xorg.conf.d file or startup script.
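
A minimal sketch of such a wrapper, assuming the xorg.conf.d output format shown in Listing 1 (adapt the paths to your system):

#!/bin/sh
# Run the calibration and keep only the emitted InputClass section...
xinput_calibrator | sed -n '/^Section "InputClass"/,/^EndSection/p' > /tmp/99-calibration.conf
# ...then install it with root privileges
sudo cp /tmp/99-calibration.conf /etc/X11/xorg.conf.d/99-calibration.conf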

Wrapping Up

That concludes this Tech Tip. Have a look at the Innovations Technology Solutions website for other tips, projects, how-tos, and service offerings. Thanks, and stay tuned for more from Innovations.



Resources

  1. xinput-calibrator Page
  2. xinput-calibrator On GitHub
  3. Innovations Technology Solutions Page
  4. Bodhi Linux Website
  5. Ubuntu Linux Website
  6. Fedora Linux Website

Video Tip – Finding Open IP Addresses


Welcome, this is an Innovations Tech Tip. In this tip we’re going to explore a couple of ways to find open IP (Internet Protocol) addresses on your network. You might need this information if you were going to temporarily set a static IP address for a host. Even after you’ve found an open IP though, you still need to take care to avoid IP conflicts if your network uses DHCP (Dynamic Host Configuration Protocol). Please also be aware that one of these techniques uses the nmap network scanning program, which may be against policy in some environments. Even if it’s not against corporate policy, the nmap man page states that “there are administrators who become upset and may complain when their system is scanned. Thus, it is often advisable to request permission before doing even a light scan of a network.”2





The first technique that we’re going to cover is the use of the arping command to tell if a single address is in use. arping uses ARP (Address Resolution Protocol) instead of ICMP (Internet Control Message Protocol) packets. The reason this is significant is that many firewalls will block ICMP traffic as a security measure, so when using ICMP you’re never sure whether the host is really down or just blocking your pings. ARP pings will almost always work, because ARP packets provide the critical network function of resolving IP addresses to MAC (Media Access Control) addresses. Hosts on an Ethernet network use these resolved MAC addresses to communicate instead of IPs. Be aware that one case in which ARP pings will not work is when you’re not on the same subnet as the host you’re trying to ping, because ARP packets are not routed. See Resource #3 below for more details.

arping has several options, but the three that we’ll be focusing on here are -I, -D, and -c. The -I option specifies the network interface that you want to use. In many cases you might use eth0 as your interface, but I’m using a laptop connected via wireless, so my interface is wlan0. The -D option checks the specified address in DAD (Duplicate Address Detection) mode. Let’s look at an example.

Listing 1

$ arping -I wlan0 -D
ARPING from wlan0
Unicast reply from [D4:4D:D7:64:C6:5F] for [D4:4D:D7:64:C6:5F] 2.094ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)

You can see that I’m pinging (a known router on my test network; substitute an address from your own) with the -D option. If no replies are received, DAD mode is considered to have succeeded, and you can be reasonably sure that address is free for use. Listing 2 shows an example of what you would see if the address is not in use.

Listing 2

$ arping -I wlan0 -c 5 -D
ARPING from wlan0
Sent 5 probes (5 broadcast(s))
Received 0 response(s)

Here I’ve picked a different network address that I knew would be unused. I’ve also added the -c option mentioned above so that I could have arping stop after sending 5 requests. Otherwise arping would keep trying until I interrupted it (possibly via the Ctrl-C key combo).

Armed with this information and a knowledge of any dynamic addressing scheme on my network, I can set a temporary static IP for a host. See Resource #1 for more information on arping.
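For example, once you’ve verified that an address is free, you could assign it temporarily with the iproute2 ip command. This is just a sketch; the address below is a placeholder, and you’d substitute one that arping showed to be unused on your subnet:

# Temporarily assign the free address (192.168.1.250 is only an example)
sudo ip addr add 192.168.1.250/24 dev wlan0

# Remove it again when you're done
sudo ip addr del 192.168.1.250/24 dev wlan0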


nmap, which stands for “Network Mapper”, was “designed to rapidly scan large networks…to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.”2 We’ll be using it to find all of the currently used IP addresses on the network.

nmap has many options and is a very deep utility, and I highly suggest spending some time reading its man page. Of all these options, the only one that we’ll be dealing with in this quick tech tip is -e. The -e option allows you to specify the interface to use when scanning the network. This is similar to the -I option of arping. The example below shows a simple usage.

Listing 3

$ nmap -e wlan0
Starting Nmap 5.21 ( ) at 2011-08-23 11:13 EDT
Nmap scan report for
Host is up (0.033s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
23/tcp   open  telnet
53/tcp   open  domain
80/tcp   open  http
5000/tcp open  upnp
Nmap scan report for
Host is up (0.00015s latency).
Not shown: 997 closed ports
PORT     STATE SERVICE
111/tcp  open  rpcbind
5900/tcp open  vnc
8080/tcp open  http-proxy
Nmap scan report for
Host is up (0.033s latency).
Not shown: 995 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
111/tcp  open  rpcbind
139/tcp  open  netbios-ssn
445/tcp  open  microsoft-ds
2049/tcp open  nfs
Nmap done: 256 IP addresses (3 hosts up) scanned in 4.22 seconds

The first thing to notice is the notation that I used to specify the subnet mask (/24). If you’re unfamiliar with this CIDR notation, please see Resource #5 below. The next thing to notice is that nmap gives us a lot more information than just which IPs are in use. nmap also shows us things like what ports are open on each host, and what service it thinks is running on each port. As a network administrator you can use this information to get a quick overview of your network, or you can dig deeper into nmap to perform in-depth network audits. In our case we’re just looking for an open IP address to use temporarily, so we can choose one that’s not listed. Again, care needs to be taken when statically setting IPs on a network with DHCP. Have a look at Resource #4 for a more comprehensive guide to using nmap.

That concludes this Tech Tip. Have a look at for other tips, tricks, how-tos, and service offerings available from Innovations Technology Solutions. Thanks, and stay tuned for more from Innovations.


  1. man arping
  2. man nmap
  3. – Gerard Beekmans – Ping: ICMP vs. ARP
  4. Network Uptime – James Messer – Secrets of Network Cartography: A Comprehensive Guide to nmap
  5. – Bradley Mitchell – CIDR – Classless Inter-Domain Routing

Video Tip – Using Pipes With The sudo Command


Welcome, this is an Innovations Tech Tip. In this tip we’re going to cover how to run a command sequence, such as a pipeline, using sudo (sometimes also pronounced “pseudo”). It may be tempting to think of the “su” in sudo as standing for “super user”, since you normally use sudo to execute things as root (especially if you’re an Ubuntu user). Something that may surprise you, though, is that you can use the -u option of sudo to specify the user to run the command as, assuming you have the proper privileges. Have a look at the sudo man and info pages for more interesting options.
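For example (a quick sketch, with the user name being just an illustration), running a single command as another user looks like this:

$ sudo -u apache whoami
apache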




Now, if you’ve ever tried to use sudo to run a command sequence such as a pipeline, where each step required superuser privileges, you probably got a Permission denied error. This is because sudo only applies to the first command in the sequence and not the others. There are multiple ways to handle this, but two stand out to me. First, you can use sudo to start a shell (such as bash) with root privileges, and then give that shell the command string. This can be done using the -c option of bash. To illustrate how this works, I’ll start out using sudo to run cat on a file that I created in the /root directory that I normally wouldn’t have access to.

Listing 1

$ cat /root/example.txt
cat: /root/example.txt: Permission denied
$ sudo cat /root/example.txt
[sudo] password for jwright:
You won't see this text without sudo.

If I try to use sudo with a pipeline to make a compressed backup of the /root/example.txt file, I again get the Permission denied error.

Listing 2

$ sudo cat /root/example.txt | gzip > /root/example.gz
-bash: /root/example.gz: Permission denied

Notice that it’s the second command (the gzip command) in the pipeline that causes the error. That’s where our technique of using bash with the -c option comes in.

Listing 3

$ sudo bash -c 'cat /root/example.txt | gzip > /root/example.gz'
$ sudo ls /root/example.gz
/root/example.gz

We can see from the ls command’s output that the compressed file creation succeeded.

The second method is similar to the first in that we’re passing a command string to bash, but we’re doing it in a pipeline via sudo.

Listing 4

$ sudo rm /root/example.gz
$ echo "cat /root/example.txt | gzip > /root/example.gz" | sudo bash
$ sudo ls /root/example.gz
/root/example.gz

Either method works; which one you use is just a matter of personal preference.

That concludes this Tech Tip. Have a look at for other tips, tricks, how-tos, and service offerings available from Innovations Technology Solutions. Thanks, and stay tuned for more quick tips from Innovations.



Writing Better Shell Scripts – Part 3

Quick Start

This post doesn’t really lend itself to being a quick read, but you can have a look at the How-To section of this post and skip the rest if you’re in a hurry. I would highly recommend reading everything though, since there’s a lot of information that may serve you well in the future. There is also a video attached to this post that may be a good quick reference for you. Don’t forget that the man and info pages of your Linux/Unix installation can be an invaluable resource as well when you’re trying to learn new concepts and solve problems.





To make things easier on you, all of the black command line and script areas are set up so that you can copy the text from them. This does make using the commands and scripts easier, but if you’re not already familiar with the concepts presented here, typing the commands/code yourself and working through why you’re typing them will help you learn more. If you hit problems along the way, take a look at the Troubleshooting section near the end of this post for help.

There are formatting conventions that are used throughout this post that you should be aware of. The following is a list outlining the color and font formats used.

Command Name or Directory Path
Warning or Error
Command Line Snippet With Commands/Options/Arguments
Command Options and Their Arguments Only


There is no way for me to cover all of the issues surrounding shell script security in a single blog post. My goal with this post is to help you avoid some of the most common security holes that are often found in shell scripts. No script can be made un-crackable, but you can make the cracker’s task more challenging by following a few guidelines. A secondary goal with this post is to make you more savvy about the scripts that you obtain to run on your systems. Because scripts written for the BASH and SH shells are so portable in the Linux/Unix world, it can be easy for a cracker to write malware that will run on many different systems. Having some knowledge about the security issues surrounding shell scripts might just keep you from installing or running a malicious script such as a trojan, which gives the cracker a back door to your system. The Resources section holds books and links which will allow you to delve more deeply into this topic if you’re looking for more comprehensive knowledge. Listing 1 shows an example script that contains some of the security problems that we’ll talk about in this post.

Listing 1

#!/bin/bash
# A SUID root script that demonstrates various security problems

# Prepend the current path onto the PATH variable
PATH=.:${PATH}

#Count the number of lines in a listing of the current directory
ls | wc -l

# Get user input
read USR_INPUT

# Check to see if the user supplied the right password
if [ $USR_INPUT == "mypassword" ];then
    echo "User input was $USR_INPUT and should have matched the string 'mypassword'"
fi

# Create a temp file
touch /tmp/mytempfile

# Set the temp file so that only the owner can read/write/execute the contents
chmod 0700 /tmp/mytempfile

# Save the password that the user supplied to the temp file
echo $USR_INPUT > /tmp/mytempfile

Environment Variables

Your shell script has little to no chance of running securely if it trusts the environment that it runs in, and that environment has been compromised. You can help protect your script from unintended behavior by not trusting items like environment variables. Whenever possible, assume that input from the external environment has been designed to cause your script problems.

The PATH variable is a common source of security holes in scripts. Two of the most common issues are the inclusion of the current directory (via the . character) in the path, and using a PATH variable that’s been manipulated by a cracker. The reason that you don’t want the current directory included in the path is that a malicious version of a command like ls could have been placed in your current directory. For example, let’s say your current directory is /tmp, which is world writable. A cracker has written a script named ls and placed it in /tmp as well. Since you have the current directory at the front of your PATH variable in Listing 1, the malicious version of ls will be run instead of the normal system version. If the cracker wanted to help cover their tracks, they could run the real version of ls before or after running their own code. Listing 2 shows a very simple script that could replace the system’s ls command in this case.

Listing 2

#!/bin/bash

# Run the real ls with the original arguments/options to cover our tracks
/bin/ls "$@"

# Run whatever malicious code we want here
echo "Malicious code"

There’s a decent chance that any cracker who planted the fake ls would create it in such a way that it would look like ls was running normally. This is what I’ve done in Listing 2 by passing the "$@" variable to the real ls command so that the user doesn’t suspect anything. This brings up another point besides the use of the current directory in the path. Just because your script seems to be running fine from the user’s point of view doesn’t mean that it hasn’t been compromised. A good cracker knows how to cover their tracks, so if a security flaw has been exploited in your script the breach may go undetected for an indefinite period of time.

You can see in Listing 1 that the order of directories in the PATH variable makes a difference. This is important because if a cracker has write access to a directory that’s earlier in the search order, they can preempt the standard directories like /bin and /usr/bin that may be harder to gain access to. When you try to run the standard command, the malicious version will be found first and run instead. All the cracker has to do is insert a replacement command, like the one in Listing 2, earlier in the path search order.

The second main problem with the PATH environment variable is that it could have been manipulated by a cracker before, or as your script was run. If this happens, the cracker could point your script to a directory that they created which holds modified versions of the system utilities that your script relies on. Knowing this, it’s best if you add code to the top of your script to set the PATH variable to the minimal value your script needs to run. You can save the original PATH variable and restore it on exit. Listing 3 shows Listing 1 with the current directory removed from the PATH variable, and a minimal path set to lessen the chances of problems. Keep in mind though that a cracker could have compromised the actual system utilities that are in locations such as /bin and /sbin. Ways to detect and combat this occurrence fall more into the system security realm though and won’t be talked about in this post.

Listing 3

#!/bin/bash
# A SUID root script that demonstrates various security problems

# Save the current path variable to restore it later
OLDPATH=${PATH}

# Set a minimal path for our script to use
PATH=/bin:/usr/bin

#Count the number of lines
ls | wc -l

# Get user input
read USR_INPUT

# Check to see if the user supplied the right password
if [ $USR_INPUT == "mypassword" ];then
    echo "User input was $USR_INPUT and should have matched the string 'mypassword'"
fi

# Create a temp file
touch /tmp/mytempfile

# Set the temp file so that only we can read/write the contents
chmod 0700 /tmp/mytempfile

# Save the password that the user supplied to the temp file
echo $USR_INPUT > /tmp/mytempfile

# Reset the PATH variable to its original value
PATH="$OLDPATH"

In your own scripts it would probably be best to put the reset of the PATH variable inside of a trap on the exit condition. That way PATH gets reset to the original value even if your script is terminated early. I wrote about traps in the last post in this series on error handling.
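A minimal sketch of that idea follows, assuming your script doesn’t already define its own EXIT trap elsewhere:

#!/bin/bash -
# Save the original PATH, and restore it even on early termination
OLDPATH=${PATH}
trap 'PATH="$OLDPATH"' EXIT

# Set a minimal path for the rest of the script to use
PATH=/bin:/usr/bin

# ...rest of the script...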

Another, less desirable way of avoiding malicious PATH exploits is to use the full (absolute) path to the binary your script is trying to run. So, instead of just entering ls by itself, you would enter /bin/ls. This ensures that you’re running the binary that you intend to, but it’s a more “brittle” approach. If your script is run on a system where the binary you are calling is in a different location, your script will break when the command is not found. One approach to help cut down on this drawback is to use the whereis command to locate the command for you. Caution needs to be applied with this approach too, but I’ve created an example in Listing 4 that shows how to do this. Remember that if the cracker has somehow compromised the system’s standard version of the command that you’re trying to run, this technique won’t help. At that point, though, it becomes a system security problem rather than a script security problem.

Listing 4

#!/bin/bash -
#File:

# Attempt to find the command with the whereis command
CMD=$(whereis $1 | cut -d " " -f 2)

# Check to make sure that the command was found
if [ -n "$CMD" ];then
    echo "$CMD"
fi

The script uses the command name to give the user the full path to the binary, if it can be found. There are of course numerous improvements that you could make to the script in Listing 4. My main suggestion would be to rewrite the script as a function, and then put that inside a script that you can source. That way you maximize code reuse throughout the rest of your scripts. I’ve done this in Listing 29 via the run_cmd function.
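As a rough sketch of that idea (a simplified stand-in, not the full run_cmd function from Listing 29), the logic from Listing 4 might become a sourceable function like this, where the function name is just an example:

# Echo the full path to a command via whereis, or nothing if not found.
# Source this file, then call: find_cmd ls
function find_cmd {
    local CMD=$(whereis "$1" | cut -d " " -f 2)

    if [ -n "$CMD" ];then
        echo "$CMD"
    fi
}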

Another environment variable that can be problematic is IFS. IFS stands for “Internal Field Separator” and is the variable that the shell uses when it breaks strings down into fields, words, and so on. It can actually be a handy variable to manipulate when you’re doing things like using a for loop to deal with a string that has odd separator characters. If your shell inherits the IFS variable from its environment, a cracker can insert a character or characters that will make your script behave in an unexpected way. For example, suppose I have a few scripts in my ~/bin directory that I want to run together (or nearly together). The script in Listing 5 shows one very simple way of doing this.

Listing 5

#!/bin/bash -

BINS="/home/jwright/bin/ /home/jwright/bin/"

for BIN in $BINS
do
    echo $($BIN)
done

When I run the script I get the output from and that I expect. In this case the scripts just output their name and exit. Everything is fine until a cracker comes along and sets the IFS variable to a forward slash (/). Now when I run my script I get the output in Listing 6.

Listing 6

$ ./
./ line 8: home: command not found
./ line 8: jwright: command not found
./ line 8: bin: command not found
./ line 8: : command not found
./ line 8: home: command not found
./ line 8: jwright: command not found
./ line 8: bin: command not found
executing

Notice that since the directory /home/jwright/bin is in my path, the call should have run. If you look closely though you’ll see that there is a space after the filename, which causes the command to not be found. The IFS variable change has not only broken my script, it has allowed the cracker to open up a significant security hole. If the cracker creates a program or script with any of the names like home, jwright, or bin anywhere in the directories in PATH, their code will be executed with the privileges of my script. Because of the privilege issue, this security hole is an even bigger problem with SUID root scripts.

On some Linux distributions, the IFS variable is not inherited by a script and instead a default standard IFS value is used. You can still change the value of IFS within your script though. With this said, it’s still a good idea to set the IFS variable to a known value at the beginning of your script and restore it before your script exits. This is similar to the change we made in Listing 3 to store and reset the PATH variable. Even though the distribution that you’re developing your script on may not allow IFS inheritance, your script may be moved to another distribution that does. It’s best to be safe and always set IFS to a known value.
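A minimal sketch of that save/set/restore pattern might look like this:

#!/bin/bash -
# Save whatever IFS value we inherited
OLDIFS=$IFS

# Set the standard default of space, tab, and newline
IFS=$' \t\n'

# ...script body that can now rely on a known IFS...

# Restore the original value before exiting
IFS=$OLDIFS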

Make sure that you never use the UID, USER, and HOME environment variables to do authentication. It’s too easy for a cracker to modify the values of these variables to give themselves elevated privileges. Now on the Fedora system that I’m using to write this blog post the UID variable is readonly, so I can’t change it. That doesn’t guarantee that every system that your script runs on will make UID readonly though. Err on the side of caution and use the id command or another mechanism to authenticate users instead of variables. The id command is very useful, and can give you information like effective user ID, real user ID, username, etc. Listing 7 is a quick reference of some of the id command’s options.

Listing 7

-g (--group)    Print only the effective group ID
-n (--name)     Print a name instead of a number, for -ugG
-r (--real)     Print the real ID instead of the effective ID, with -ugG
-u (--user)     Print only the effective user ID
-Z (--context)  Print only the security context of the current user (SELinux)

You’ll need to use the options -u and -g with some of the other options (-r and -n) so that the id command knows whether you want information on the user or group. For example you would use /usr/bin/id -u -n to get the name of the user instead of their user ID.
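Putting that to use, a sketch of an id-based check (rather than trusting a variable like USER) might look like the following, where the user name is just an example:

# Refuse to run unless the *real* user is jwright
if [ "$(/usr/bin/id -run)" != "jwright" ];then
    echo "You are not authorized to run this script."
    exit 1
fi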

The fact that the UID variable is set to readonly on my system gives you a hint at how to protect some variables. There is actually a command named readonly that sets variables to a readonly state. This does protect variables from being changed, but it also keeps you as the “owner” of the variable from making any changes to it too. You can’t even unset a readonly variable. To make a variable readonly, you would issue a command line like readonly MYVAR . Make sure to carefully evaluate whether or not a variable will ever need to change or be unset before setting it to readonly.
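A quick interactive session shows the behavior:

$ readonly MYVAR="some value"
$ MYVAR="new value"
bash: MYVAR: readonly variable
$ unset MYVAR
bash: unset: MYVAR: cannot unset: readonly variable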

There’s an IBM developerWorks article in the Resources section (#20) that mentions security implications for some other environment variables such as LD_LIBRARY_PATH and LD_PRELOAD. That would be a good place to start digging a little deeper on the security issues surrounding environment variables.

Symbolic Links

You should always check symbolic links to make sure that a cracker is not redirecting you to their modified code. Symbolic links are a transparent part of everyday life for Linux users. Chances are that when you run sh on your favorite Linux distribution, /bin/sh is actually a link to /bin/bash. Go ahead and run ls -l /bin/sh if you’ve never noticed this before. Symbolic link attacks can take a few different forms, one of which is redirection of sensitive data. In one situation, you may think that you’re caching sensitive data to a file you’ve created in /tmp with 0700 file permissions. Instead, by exploiting a race condition in your script (we’ll talk about race conditions later), a cracker first creates a symbolic link with the same filename that your script will be writing data into, causing your creation of the temporary file to throw an error. If your script doesn’t stop on this error, it will begin dumping data into the file at the end of the symbolic link. The endpoint of the link could be on a mounted remote filesystem where the cracker can get easier access to it. There were several mistakes made in this scenario that we’ll talk more about later, but before that let’s look at making sure we’re not writing data to a symbolic link.

Listing 8

#!/bin/bash -
#File:

# Poor method of temp file creation
touch /tmp/mytempfile

# Check the new temp file to see if it's a symbolic link
IS_LINK=$(file /tmp/mytempfile | grep -i "symbolic link")

# If the variable is not null, then we've detected a symbolic link
if [ -n "$IS_LINK" ];then
    echo "Possible symbolic link exploit detected. Exiting."
    exit 1 #Exit before we dump the sensitive data into the link
fi

# Dump our sensitive data into the temp file
echo "Sensitive Data" > /tmp/mytempfile

If our script sees the string “symbolic link” in the output from the file command, it assumes that it’s looking at an attempted symbolic link exploit. Rather than continuing on and possibly sending data to a cracker, the script chooses to warn the user and exit with an exit status indicating an error. Be aware though that this script doesn’t protect against the situation where a cracker creates your temp file in place with permissions that give themselves access to the data. In the case that you don’t expect the temp file to already be there, you would throw an error and exit. This brings up another problem though – DoS (Denial of Service) attacks. If the cracker simply wants your script to fail, all they have to do is make sure your temp file has already been created so that your script will throw an error and exit. You’re not handing over sensitive data, but your users are being denied the use of your script. The answer to this is to create temporary files with less-predictable file names.

“Safe” Temporary Files

In the header for this section, I put the word safe in quotes to denote that it’s very difficult to make anything completely safe. What you have to do is make things as safe as possible, and then keep an eye out for suspicious activity. In the last blog post I created a function named create_temp that used a simple, but risky mechanism to create temp files. A snippet of the code from that listing is shown in Listing 9.

Listing 9

# Function to create "safe" temporary files
function create_temp {
    # Give preference to user tmp directory for security
    if [ -e "$HOME/tmp" ]
    then
        TEMP_DIR="$HOME/tmp"
    else
        TEMP_DIR="/tmp"
    fi

    # Construct a "safe" temp file name
    TEMP_FILE="$TEMP_DIR"/"$PROGNAME".$$.$RANDOM

    # Keep the file in an array to remove it later
    TEMPFILES+=( "$TEMP_FILE" )

    { touch $TEMP_FILE &> /dev/null; } || fatal_err $LINENO "Could not create temp file $TEMP_FILE"
}

The problem with this function is that it uses a temporary file name with 2 elements that are easy to predict – the program name and the process ID. The fact that there is a random number on the end is only an inconvenience for the cracker, because all they have to do is create a file for each possible file name with an ending number between 0 and 32767. They can be sure that you’ll dump data into one of those files, and it’s easy to write a script to find out which file holds the data. A slightly better method would be to append multiple sets of random numbers onto the file name, separating each set with periods. This makes it much harder for the cracker to cover all the possible file names. A much better way to handle this situation is to use the mktemp command, which is available on most Linux systems.

The mktemp command takes a string template that you supply and creates a unique temporary file name. The form could be something like mktemp /tmp/test.XXXXXXXXXXXX which would print the random file name to standard out and create a file with that name and path. Running that command line on a Fedora 13 system once gave me the output /tmp/test.o0mTLAgSWTfX which of course will vary each time you run the command. The more X characters you add to the template, the harder it is for a cracker to predict the file name. From what I’ve read, 10 or so is the recommended minimum amount. Another nice thing about mktemp is that when it creates a temp file, it makes sure that only the owner has access to it. Some useful options for mktemp are shown in Listing 10. You should use mktemp in preference to commands like touch and echo to create temp files.

Listing 10

-d (--directory)  Create a directory, not a file.
-q (--quiet)      Suppress diagnostics about file/dir-creation failure.
--suffix=SUFF     Append SUFF to TEMPLATE. SUFF must not contain slash.
                  This option is implied if TEMPLATE does not end in X.
--tmpdir[=DIR]    Interpret TEMPLATE relative to DIR. If DIR is not
                  specified, use $TMPDIR if set, else /tmp. With this
                  option, TEMPLATE must not be an absolute name. Unlike
                  with -t, TEMPLATE may contain slashes, but mktemp
                  creates only the final component.

There are just a few other miscellaneous facts about mktemp that I want to make sure you’re aware of.

  1. The man pages for mktemp on both Ubuntu 9.10 and Fedora 13 systems specify that the minimum number of X characters that you can have in a template is three. Even though you can go this low, I wouldn’t recommend it because it greatly increases the predictability of your file names. Ten or more random alpha-numeric characters is better.
  2. mktemp is commonly part of the coreutils package.
  3. The default number of X characters that you get when you don’t specify a template with mktemp is 10. This held true on the Fedora 13 and Ubuntu 9.10 systems that I tested.
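Pulling those pieces together, here's a short sketch of creating and cleaning up a temp file with mktemp (the template name is arbitrary):

#!/bin/bash -
# Create a temp file with an unpredictable name; bail out on failure
TEMP_FILE=$(mktemp /tmp/myscript.XXXXXXXXXXXX) || exit 1

# Make sure the file is removed no matter how the script exits
trap 'rm -f "$TEMP_FILE"' EXIT

# Use the temp file as needed
echo "working data" > "$TEMP_FILE"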

So what happens if you don’t have mktemp on your system? The article in the Resources section (#17) gives a way to use mkdir to create a temporary directory that only the creator has access to. A script based on the examples in that article is found in Listing 11, but should not be used in preference to the mktemp command unless you have a compelling reason.

Listing 11

#!/bin/bash -
#File

# Give preference to user tmp directory for security
if [ -e "$HOME/tmp" ];then
    TEMP_DIR="$HOME/tmp"
else
    TEMP_DIR="/tmp"
fi

# Create somewhat secure directory name
TEMP_NAME=${TEMP_DIR}/$$.$RANDOM.$RANDOM.$RANDOM

# Create the directory while at the same time giving
# only the user access to it
(umask 077 && mkdir $TEMP_NAME) || {
    echo "Error creating the temporary directory."
    exit 1
}

Notice that this script does use multiple references of the RANDOM variable separated by periods to make the directory name harder to guess. Also, the umask is set to 077 just before the directory is created so that the end directory permissions are 700. That gives the owner full access to the directory, but none to anyone else. At the top of the script I have reused code from the create_temp function in Listing 9. This code gives preference to the user’s home directory over the system /tmp directory. If the temporary file or directory that you are creating can be placed in the user’s home directory, that’s just one more layer of protection from prying eyes. I would suggest using the user’s own tmp directory whenever possible.

Keep in mind that as I mentioned above, even though you’ve protected the data in the temp files, a cracker can still launch a DoS (Denial of Service) attack against your script. In this case, since the cracker probably can’t guess the temporary file name, they might try to fill the /tmp directory so that there’s no more space for you to create your file. Things like user disk quotas can help mitigate this type of attack though.

Now that you know a little more about temp file safety, I’ll caution you not to overuse temporary files. When you store or use data in external files you are opening a door into your script that a knowledgeable individual may be able to exploit. Use temp files only when needed, and make sure to consistently follow safe guidelines for their use.

Race Conditions

A race condition occurs when a cracker has a window of opportunity to preempt and modify your script’s behavior, usually by exploiting a design flaw in the execution sequence of your script, or in its reliance on an external resource (like a lock file). The example that we’ve already talked about is creating a symbolic link or a file in place of the script’s temp file to capture data. The script that I’ve created in Listing 12 uses the sleep command to create a larger window for a race condition.

Listing 12

#!/bin/bash -
#File:

TEMP_FILE=/tmp/predictable_temp

# Make sure that the temp file doesn't already exist
if [ ! -f $TEMP_FILE ];then
    # Do something here that takes 10 seconds. This
    # creates the race condition and is simulated by
    # the sleep command
    sleep 10

    # Create the temp file
    touch $TEMP_FILE

    # Make sure only the user can view the contents
    chmod 0700 $TEMP_FILE

    # Dump our sensitive data to the temp file
    echo "secretpassword" > $TEMP_FILE
fi

Once the script is run, the cracker has 10 seconds to create the temp file before the script does. The timing is rarely as simple as I have made it out to be in this example, but the 10 second gap between checking for the existence of the file and the creation of it illustrates the point. The two lines in Listing 13 can be entered as a different user. The chmod command in Listing 12 will fail because the file is owned by a different user, but the script has another flaw in that it doesn’t check for that error before writing the data. Because of this the sensitive data is written into a file that is easy for the cracker to read. Checking for an error, and making sure that the file you want to create doesn’t already exist and has the correct permissions, would go a long way toward making this script more secure.

Listing 13

touch /tmp/predictable_temp
chmod 0777 /tmp/predictable_temp

When the 10 second delay expires in my script I get the error chmod: changing permissions of `/tmp/predictable_temp': Operation not permitted just before the data is written to the file. The temp file is accessible to the cracker using the cat command, and an ls -l of the temp file shows that it’s owned by the user name that the cracker used. There are other race condition exploits, but the moral of the story is to not leave gaps between critical sections of your script. Listing 11 shows a good example of closing the gap between operations. In that case the permissions are set as the directory is created by setting the umask before the call to mkdir. Race conditions are certainly something to keep in mind as you attempt to increase the security of your scripts.
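Another way to close this kind of gap, sketched below, is to let the shell's noclobber option turn the existence check and the file creation into a single atomic step:

#!/bin/bash -
TEMP_FILE=/tmp/predictable_temp

# With noclobber (-C) set, the > redirection fails if the file
# already exists instead of overwriting it or following a link
set -C
if ! echo "secretpassword" > $TEMP_FILE 2> /dev/null;then
    echo "Temp file already exists - possible attack. Exiting."
    exit 1
fi
set +C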

The Shebang Line

You may have noticed before that I put a dash and a space (a bare option) at the end of most of my script shebang lines. This is the same as the double dash (--) option and signals the end of all options for the shell. Any options that are tacked onto the end of the shebang line will be treated as arguments to the shell, and will most likely throw an error. The reason that this is important is that it prevents option spoofing. On some systems if the cracker can get the shebang line to effectively read #!/bin/sh -i they will get an interactive shell with the privileges of the script. It’s important to note that I was not able to get an interactive shell using a script on a Fedora 13 system, even when I entered the shebang line directly as having the -i option. Even so, you don’t always know which systems your script will run on, and it only takes a fraction of a second to add the dash (or double dash) at the end of your shebang line. That’s a very small price to pay for some added security.

User Input

As I discussed in the error handling post of this series, user input should be processed cautiously. Even when there is no malicious intent by a user very serious errors can result from incorrect input. At its worst, user input can give a cracker an open door into your system through things like injection attacks. Keeping this in mind, there are a few guidelines that you can follow to help keep user input from bringing your script down.

If you can avoid it, don’t pass user input to the eval command, or pipe the input into a shell binary. This is a script crash or security problem waiting to happen. Listing 14 shows the wrong way to handle user input when it’s captured with the read command.

Listing 14

#!/bin/bash -
#File:

# Get the input from the user
read USR_INPUT

# Don't use eval like this
eval $USR_INPUT

# Don't pipe input to a shell like this
echo $USR_INPUT | sh

It’s probably pretty easy to agree with me that the script in Listing 14 is a bad idea. The user can type any command string they want (including rm -rf /*) and it will be executed with the privileges of the script. Depending on how much the permissions of the script are elevated, this could do a lot of damage. Another scenario that may seem more harmless is the one in Listing 15.

Listing 15

#!/bin/bash -

read USR_INPUT

if [ $USR_INPUT == "test" ];then
    echo "You should only see this if you typed test."
fi

Everything works fine until a cracker enters the string random == random -o random and hits enter. What this effectively does is change the if statement so that it reads if [ random == random -o random == "test" ], where the -o is a logical or. It tells the if statement that the test succeeds if either the first comparison or the second one is true. Of course the first comparison (random == random) is true, so what’s inside the if statement executes even though the cracker didn’t type the correct word or phrase. Depending on what’s inside the if statement, that security hole could range from a minor to a major problem. The way to combat this is to quote your variables (i.e. "$USR_INPUT") so that they are tested as a whole string. In general quoting your variables is a good idea anyway, as you’ll also head off problems with things like spaces that might otherwise cause your script trouble.
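With that one change applied, the script from Listing 15 becomes:

#!/bin/bash -

read USR_INPUT

# Quoting the variable forces it to be tested as a single string
if [ "$USR_INPUT" == "test" ];then
    echo "You should only see this if you typed test."
fi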

This is an example of an injection attack where the cracker slips some extra information in with the input to trick your script into running unintended code. This is a very common attack “vector” for database and web servers where a cracker carefully crafts a request to cause arbitrary code to execute, or to bring down the web/database service. Another script that can be exploited by an injection attack is found in Listing 16.

Listing 16

#!/bin/bash -

read USR_INPUT

#This line contains the fatal security flaw
echo ls $USR_INPUT >

This script isn’t necessarily something that you would do in the real world, but it’s a simple way to demonstrate this injection attack. What the script does is take a list of directories from the user and then build a script using the ls command to list the contents of the directories. The injection attack comes when a cracker types && rm randomfile and you find that the resulting script ( contains a line that will delete files (Listing 17). The && rm randomfile line could just as easily have been && rm -rf /* if the cracker wanted.

Listing 17

ls && rm randomfile

The && operator runs the second command in the sequence if the first command runs successfully (without an error). The ls command is not likely to fail by itself as it just lists the contents of the current directory, so the rm command will most likely run and delete files. The method to deal with this type of attack is similar to the previous method of quoting, except that in this case you escape the quotes around the user input to make sure that it is properly contained. Listing 18 shows the corrected script.

Listing 18

#!/bin/bash -

read USR_INPUT

#This line uses escaped quotes to enclose the potentially dangerous input
echo ls \"$USR_INPUT\" >>

Along with quoting, it’s a good idea to search user input for unacceptable entries like meta or escape characters. You can search the user input for these undesirable characters and replace them with something harmless to your script like a blank character or underscore. When doing this, it may be easier to search for the characters that are acceptable instead of trying to cover every single character that’s not acceptable. The set of acceptable characters is almost always smaller, and it’s hard to anticipate every bad character that might be passed to your script. Listing 19 shows a simple way of cleaning the input using the tr command.

Listing 19

#!/bin/bash -
#File:

# Grab the user's input
read USR_INPUT

# Remove all characters that aren't alphanumeric or newline
USR_INPUT=$(echo "$USR_INPUT" | tr -cd '[:alnum:]\n')

This script takes the user input using the read command as before, but then pipes the value directly into the tr command. The tr command’s -c (--complement) and -d (--delete) options are used to cause tr to look for and delete the unmatched characters. So, anything that’s not an alphanumeric character (via the alnum character class) or a newline character will be deleted. It’s not hard to adapt the tr statement to your situation, maybe even replacing the characters instead of deleting them.
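If you'd rather replace the offending characters than delete them, a small tweak to the tr line does it (the underscore is an arbitrary choice):

# Replace anything that isn't alphanumeric or a newline with an underscore
USR_INPUT=$(echo "$USR_INPUT" | tr -c '[:alnum:]\n' '_')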

As with the other topics in this post I’m scratching the surface, but hopefully you can see how important it is to check user input before doing anything with it. The inability of a script or program to handle improper input is a common bug in the software world. Whether the user has malicious intent or not, bad user input is something that you must plan for.

SUID, SGID, and Scripts

There are several scenarios above that may not cause that much harm on their own, because the user running the script has restricted permissions. This can all change with a script that has its SUID and/or SGID bit set though. The SUID and SGID bits show up in the first of four digits in the octal representation of a file’s permissions. The SUID bit has a value of 4 and the SGID bit has a value of 2. If both bits are set you get a value of 6, which is similar to how normal permission bits can be added. The other place that you normally see the SUID and SGID bits is in the symbolic permission string. There they show up as the character “s” in either the user execute permission space or the group execute permission space, respectively. For example, if only the SUID bit was set on a script and the file had read/write/execute permissions of 755, the full permissions for the script would be 4755. The symbolic representation of this would be -rwsr-xr-x.

When the SUID bit is set on an executable, the file is run using the privileges of the file’s owner. In the same way, if the SGID bit is set on an executable, it will be run with the rights of the file owner’s group. Typically a command/script executes using the real user ID (and rights), but when the SUID or SGID bits are set the script executes with the effective user ID of the file owner instead. A common use is to have the SUID bit set on a file that is owned by root so that a user can access files and resources that they normally wouldn’t have access to. The passwd command is a good example of this. In order to change a user’s password, passwd has to access protected files such as /etc/passwd and /etc/shadow. A normal user running the passwd command needs elevated privileges to modify those files, since /etc/shadow in particular is accessible only by root. This mechanism is very handy, and as you’ve seen, sometimes required on Linux systems, but it is something that you should avoid using with your scripts whenever possible. The problem with an SUID root script is that if a cracker compromises that script, they have superuser privileges that could be used to run commands like rm -rf /*. As a programmer and/or system administrator, you need to guard against the tendency to take the easiest route to a solution rather than the most secure one. All too many admins will set a script to be SUID root when, with some thought, the script could have been designed to run without superuser privileges. With that said, you may run into situations where you have to use SUID and SGID. Just make sure that it’s a true “have to” situation. Always follow the Rule of Least Privilege, which says that you should never give a user or a program any more rights than you have to.

If you really need to use the SUID and SGID bits, you can set them with the chmod u+s FILENAME and chmod g+s FILENAME command lines respectively. Keep in mind that there are Linux distributions and Unix variants that do not honor the SUID bit when it is set on a script. You’ll need to check the documentation for your Linux distribution to be sure that setting the SUID bit will work.

You can use the find command to search for files on your system with the SUID and SGID bits set. You can use this as a security auditing tool to search for SUID/SGID scripts that look out of place. Listing 20 shows a quick and simple way to search for out of place SUID/SGID shell scripts that are on your system.

Listing 20

$ sudo find / -type f -user root -perm /4000 2> /dev/null | xargs file | grep "shell script"
/usr/bin/ setuid Bourne-Again shell script text executable

Let’s take the command line from Listing 20 one step at a time. The first section is the actual find command (find / -type f -user root -perm /4000). The find command searches for a file of type regular file (-type f) and not a directory, it checks to make sure that the file is owned by root (-user root), and that it has the SUID bit set (-perm /4000). The next short section, 2> /dev/null, redirects any errors to the null device so that they are thrown away. This effectively suppresses errors resulting from find trying to access things like Gnome’s virtual file system. The file command deciphers which type of file is being looked at. This command is not perfect, but will work for a quick and dirty security audit. The file command needs to work on each of the file names individually, so I use the xargs command to run file separately with each line of output from the find command. I could have also used the -exec option of find in the following way: -exec file '{}' \; . The command line up to this point gives me output telling what each type of file is, but I really only care about shell scripts. That’s where the grep statement comes in. I use grep to filter out only the lines that mention a “shell script”.

As you can see in the output of the command line, there is a suspicious file called in /usr/bin . Searching in this way made a file that normally would be overlooked stand out by itself. In this case I created that script and put it in /usr/bin myself so that I would have something to find, but it simulates something that you might find in the field. You could just as easily have searched for SGID scripts (-perm /2000), SUID/SGID combo scripts (-perm /6000), SUID root binaries, and much more. Be aware that if the owner execution bit is not set on a directory then it is not searchable. This would cause the find command to skip over the directory, possibly causing you to miss a suspicious file.

The SUID root mechanism can be especially dangerous if a cracker manages to make a copy of a shell binary and sets it to be SUID root. Some shells such as BASH will automatically relinquish their privileges if they’re being run this way. Keep an eye out for extra copies of shell binaries that are set SUID, as they could be part of an attack by a cracker. The shell binary could have been copied and modified using several of the security flaws that we’ve talked about above. You could use the script in Listing 20 to help you search for SUID root copies of shell binaries.

When running scripts manually as a system administrator, you should run scripts with temporarily elevated privileges through a mechanism like sudo whenever possible, rather than setting a script to be SUID root. Even with sudo though, you still need to make sure your script is as secure as possible, because sudo is still granting your script root privileges, and it doesn’t take much time to do a lot of damage. Item #16 in the Resources section touches on many of the security aspects that we’ve talked about here from the perspective of proper sudo usage.

In some cases a user may install or use your script improperly, running it as SUID root or with sudo. If you never want your script run as root, you could use the id command along with some text manipulation to warn the user and then exit. The script in Listing 21 shows one way of doing this.

Listing 21

#!/bin/bash -
#File:

# Check to see if we're running as root via sudo
if [ $(/usr/bin/id -ur) -eq 0 ];then
    echo "This script cannot be run with sudo"
    exit 1
fi

# Get the listing on this script
INFO=$(ls -l $0)

# Grab the permission at the SUID position
PERM=$(echo "$INFO" | cut -d " " -f 1 | cut -c 4)

# Grab the owner
OWNER=$(echo "$INFO" | cut -d " " -f 3)

# Check for the SUID bit and the owner of root
if [ "$PERM" == "s" -a "$OWNER" == "root" ];then
    echo "This script cannot be run as SUID root"
    exit 1
fi

The script uses the id command to check the real user ID of the user, and if it’s 0 (root) then the script warns the user that the script is not supposed to be run with sudo or as root and exits. To check for the SUID root condition, I’ve taken a slightly more complicated route. I run the command line ls -l $0 which gives me a long listing for the script name (represented by $0) showing the symbolic permission string and the owner. I then extract the character in the permission string that would represent the SUID bit as an “s” if present so that I can check it. This is done with the cut -c 4 command line which extracts the fourth character. Once I have the SUID bit and the user, I just use an if statement to check to see if both the SUID bit is set and that the script is owned by root. If both of those conditions are true, I warn the user that the script can’t be SUID root and exit.

One of the nice things about the BASH shell is that if it detects that it has been run under the SUID root condition, it will automatically drop its superuser privileges. This is nice because even if an attacker is able to make a copy of the bash binary and set it as SUID root, it will not allow them to gain additional access to the system. Unfortunately, most crackers are going to know this and will try to make a copy of another shell like sh that doesn’t have this feature.

The last thing that I’ll mention about SUID root scripts is that I have seen it suggested by several system administrators that you should use Perl or C whenever you must use SUID root. There have been arguments for and against using Perl or C in place of shell scripting, and ultimately you must decide which you feel safer with. I’m not going to argue the point, but I will say that if you use unsafe practices when writing your Perl scripts or C programs, you’re going to end up no better off anyway. Take your time and make sure the code you write is as secure as you can make it. This is a rule to live by no matter what language you’re using.

Storing Sensitive Data In Scripts

This is just a bad idea; do your best to avoid it. If you store passwords in a script, they’re just waiting to be found. Even if you set the permissions to 0700, the passwords will still be compromised if a cracker compromises your account. There’s also the risk that you might accidentally send the script to another user and forget to scrub the passwords from it.

You should also not echo passwords as a user types them. Shoulder surfers could see the password as the user enters it if you have the shell set to echo user input. To avoid this in your script, you can use stty -echo as I have in the very simple example in Listing 22.

Listing 22

#!/bin/bash -

# Turn echoing off
stty -echo

# Read the password from the user
echo "Please enter a password: "
read PASSWD

# Turn echoing back on
stty echo

Notice that only what the user types is suppressed and not the output from the echo command itself. This of course doesn’t protect the user from somebody watching what their fingers press on the keyboard, but there’s nothing that you as a programmer can do about that.
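As an aside, the BASH read builtin also has a -s (silent) option that accomplishes the same thing without juggling stty settings:

#!/bin/bash -

# -s suppresses echoing while the password is typed; -p prints a prompt
read -s -p "Please enter a password: " PASSWD

# Print a newline, since the user's Enter key press wasn't echoed
echo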

If you do end up storing passwords in your script or in files on your system, it would be a good idea to protect the information with a cryptographic hash. You can hash passwords using the md5sum or sha*sum commands. You can pipe the password string straight into the command, as with the line echo "secretpassword" | sha512sum. I would suggest writing a script that takes the password without echoing the input and converts it into a hash. A password handled this way is never reversed back into clear text; you just hash the password given by the user and compare that to the stored password hash. That way the password is not sitting out in clear text for a cracker to find. Granted, it’s still possible to attack a hash, but remember that no system is bulletproof and the goal is to make the cracker’s life as difficult as possible.
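A sketch of that approach might look like the following. The stored hash here is a placeholder; you'd generate the real one ahead of time with the same echo | sha512sum pipeline:

#!/bin/bash -
# Placeholder for the hash generated earlier via:
#   echo "secretpassword" | sha512sum
STORED_HASH="0000...placeholder...0000"

# Read the password without echoing it, then move to a new line
read -s -p "Password: " PASSWD
echo

# Hash what the user typed and compare it to the stored hash
ENTERED_HASH=$(echo "$PASSWD" | sha512sum | cut -d " " -f 1)

if [ "$ENTERED_HASH" == "$STORED_HASH" ];then
    echo "Password Accepted."
else
    echo "Password Rejected."
    exit 1
fi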

One habit that you should encourage with your users (and any system admins under you) is picking long and complex passwords. To ease the strain of having to remember a convoluted password, have users build passwords based on first letters and punctuation from a random phrase. For instance, the phrase “This is 1 fairly strong password, don’t you think Jeremy?” would reduce to “Ti1fsp,dytJ?”. The specific phrase doesn’t matter, but it should include a mix of numbers, letters (upper and lowercase), and symbols to be the most secure. Make sure that all of the symbols being used are acceptable for the system you’re choosing the password for though.

The shc Utility

The shc utility compiles a script in order to make it harder for a cracker to read its contents. This is especially useful if you find that you have to store passwords or other sensitive information inside of a script. Take note that I said “harder” and not “impossible” for a cracker to read. It’s been shown that shc compiled scripts can be reverse engineered to gain access to the contents. Remember that you should strive to make sure that your protection mechanisms are multi-layered. If you use shc to compile a script with passwords in it, hash the passwords with the md5sum command, and set the access permissions to be as restrictive as possible. That way you’re not just relying on shc to keep your data safe. Some of the options for the shc utility are shown in Listing 23.

Listing 23

-e date         The date after which the script will refuse to run (dd/mm/yyyy)
-f script_name  The file name of the script to compile
-m message      The message that will be displayed after the expiration date
-T              Allow the binary form of the script to be traceable
-v              Verbose output

Using these options I compiled a sample script via the command line in Listing 24, looked at what files were created, and then tried to run the resulting binary. The version of shc that I used was 3.8.7 which I compiled from source. I then copied the shc binary to my ~/bin directory so that I could run it more conveniently.

Listing 24

$ shc -e 08/09/2010 -m "Please contact your administrator" -v -f
shc shll=bash
shc [-i]=-c
shc [-x]=exec '%s' "$@"
shc [-l]=
shc opts=-
: No real one. Removing opts
shc opts=
shc: cc -o
shc: strip
shc: chmod go-r
$ ls
$ ./
./ has expired!
Please contact your administrator

You can see in Listing 24 that I’ve set an expiration date of September 8th, 2010, which is earlier than the date that I’m writing this. I supply the expiration message of “Please contact your administrator”, I ask shc for verbose output, and then I give it the script that I want it to compile ( When I list the files in the directory I see,, and . is the compiled binary that shc creates from my original script. is the C source code that is generated for . Be careful to keep this file in a safe place as it gives critical information that will compromise your compiled script. In Listing 24 I get an error when I try to run the compiled script (, but this is expected as I used an expiration date in the past. I did this just to show you how the compiled script would react when the expiration period expires. You don’t have to specify the expiration date, but it can be handy if you only want to give a user access to a script’s capabilities for a few days or weeks.

Overall shc is a nice tool to have at your disposal, but as I mentioned above don’t count on it for foolproof protection. The Linux Journal article in the Resources section (#5) talks about how shc compiled scripts can be cracked. Additional features have been added to newer versions of shc, such as the removal of group and other read permissions by default, to make the compiled scripts harder to get at. Even so, make sure that you have multiple layers of security surrounding your scripts as we’ve talked about earlier.


How-To

At this point, let’s take what we’ve discussed so far and apply it to the script in Listing 1. I’ve already removed the current directory from the PATH variable, and made sure that we start off with a clean path by resetting the variable in Listing 3. Listing 25 shows the script that we’ll be starting with.

Listing 25

#!/bin/bash
# A SUID root script that demonstrates various security problems

# Save the current path variable to restore it later
OLDPATH=${PATH}

# Set a minimal path for our script to use
PATH=/bin:/usr/bin

#Count the number of lines
ls | wc -l

# Get user input
read USR_INPUT

# Check to see if the user supplied the right password
if [ $USR_INPUT == "mypassword" ];then
    echo "User input was $USR_INPUT and should have matched the string 'mypassword'"
fi

# Create a temp file
touch /tmp/mytempfile

# Set the temp file so that only we can read/write the contents
chmod 0700 /tmp/mytempfile

# Save the password that the user supplied to the temp file
echo $USR_INPUT > /tmp/mytempfile

# Reset the PATH variable to its original value
PATH="$OLDPATH"

Now that we have a minimal and known PATH variable set, we can feel a little better about running the ls | wc -l command line. As stated before, we could use absolute paths for each command but that could lead to a portability issue on some systems where the binaries are stored in different locations.

The next step is to deal with the user input. I’m first going to put quotes around the variable to help ensure that it’s treated as a string, and not a part of the statement. Also, just after the read line I’m going to scrub the input to make sure there aren’t any inappropriate characters contained within it. Listing 26 shows the script with these changes.

Listing 26

#!/bin/bash
# A SUID root script that demonstrates various security problems

# Save the current path variable to restore it later
OLDPATH=${PATH}

# Set a minimal path for our script to use
PATH=/bin:/usr/bin

#Count the number of lines
ls | wc -l

# Get user input
read USR_INPUT

# Remove all characters that aren't alphanumeric or newline
USR_INPUT=$(echo "$USR_INPUT" | tr -cd '[:alnum:]\n')

# Check to see if the user supplied the right password
if [ "$USR_INPUT" == "mypassword" ];then
    echo "User input was $USR_INPUT and should have matched the string 'mypassword'"
fi

# Create a temp file
touch /tmp/mytempfile

# Set the temp file so that only we can read/write the contents
chmod 0700 /tmp/mytempfile

# Save the password that the user supplied to the temp file
echo $USR_INPUT > /tmp/mytempfile

# Reset the PATH variable to its original value
PATH="$OLDPATH"

The section of code that scrubs the user input is taken from Listing 19, and a full explanation of the process can be found in the paragraphs following that listing. In short, the user input is echoed into the tr command so that all characters except alpha-numeric and newline characters are deleted.

Of course, as I mentioned above, you wouldn’t want to store any password information in a script unless you have to. If it becomes necessary to store a password inside a script, it’s best to store only a hash of the password, generated with a command like md5sum. Think about this decision carefully, because there is almost always a way to avoid storing a password inside of a script. For the purpose of this example, I’ve decided to leave the password in the file and use md5sum to hash it. Listing 27 shows the results of adding password hashing.

Listing 27

#!/bin/bash
# A SUID root script that demonstrates various security problems

# Save the current path variable to restore it later
OLDPATH=${PATH}

# Set a minimal path for our script to use
PATH=/bin:/usr/bin

#Count the number of lines
ls | wc -l

# Make sure that nobody can see the password as it's entered
stty -echo

# Get user input
read USR_INPUT

# Re-enable echoing of typed input
stty echo

# Remove all characters that aren't alphanumeric or newline
USR_INPUT=$(echo "$USR_INPUT" | tr -cd '[:alnum:]\n')

# Check to see if the user supplied the right password, comparing md5 hashes
if [ $(echo "$USR_INPUT" | md5sum | cut -d " " -f 1) == "d84c7934a7a786d26da3d34d5f7c6c86" ];then
    # Don't echo the user's password, just tell them it worked
    echo "Password Accepted."
fi

# Create a temp file
touch /tmp/mytempfile

# Set the temp file so that only we can read/write the contents
chmod 0700 /tmp/mytempfile

# Save the password that the user supplied to the temp file
echo $USR_INPUT > /tmp/mytempfile

# Reset the PATH variable to its original value
PATH="$OLDPATH"

Next, we start getting into the temporary file section of the script. I had created a function for this in the last blog post, but we'll write the function from scratch here, applying what we've learned so far. Listing 28 shows the new function and its implementation within the script.

Listing 28

#!/bin/bash
# A SUID root script that demonstrates various security problems

# Create the array that will keep the list of temp files
TEMPFILES=( )

# Function to create "safe" temporary files.
function create_temp {
    # Give preference to user tmp directory for security
    if [ -e "$HOME/tmp" ]
    then
        TEMP_DIR="$HOME/tmp"
    else
        TEMP_DIR="/tmp"
    fi

    # Construct a "safe" temp file using mktemp
    TEMP_FILE=$(mktemp --tmpdir="$TEMP_DIR" XXXXXXXXXX)

    # Keep the file in an array to remove it later
    TEMPFILES+=( "$TEMP_FILE" )
}

# Save the current path variable to restore it later
OLDPATH=${PATH}

# Set a minimal path for our script to use
PATH=/bin:/usr/bin

#Count the number of lines
ls | wc -l

# Make sure that nobody can see the password as it's entered
stty -echo

# Get user input
read USR_INPUT

# Re-enable echoing of typed input
stty echo

# Remove all characters that aren't alphanumeric or newline
USR_INPUT=$(echo "$USR_INPUT" | tr -cd '[:alnum:]\n')

# Check to see if the user supplied the right password, comparing md5 hashes
if [ $(echo "$USR_INPUT" | md5sum | cut -d " " -f 1) == "d84c7934a7a786d26da3d34d5f7c6c86" ];then
    # Don't echo the user's password, just tell them it worked
    echo "Password Accepted."
fi

# Call the function that will create a "safe" temp file for us
create_temp

# Make sure that the temp file/name was added to the array
echo ${TEMPFILES[0]}

# Reset the PATH variable to its original value
PATH="$OLDPATH"

Within the create_temp function, I use the TEMPFILES array to hold the file names and paths of the temporary files that I create. That way I can remove them later when the script is finished. Normally I would add a trap to handle this, which I talked about in the last blog post on error handling. I left the trap out of Listing 28 just to keep the example a little shorter, but a minimal sketch of what it would look like follows below. When the create_temp function is called, the script first checks to see if the user has their own tmp directory. If they do, it's used in preference to the main /tmp directory, since /tmp is world writable. Once the tmp folder has been selected, it's passed to the mktemp command using the --tmpdir option. mktemp creates the temp file, and the pathname of the file that was created is stored in a variable. Applying the error handling techniques from the last post, I should also be checking that the temp file was created without errors; I've left that check out here to keep the script more streamlined, but you'll want to add it in your own use of this code. The path and file name stored in the variable is then added to the TEMPFILES array to be dealt with later. Once that's done, the temp file is ready for use. Normally you would redirect data into the temp file, but I've just echoed the path and name of the temp file instead.
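
For reference, here is a minimal sketch of the trap I would normally pair with create_temp. It deletes anything recorded in the TEMPFILES array when the script exits, and it mirrors the clean_up function you'll see in Listing 30 later on:

# Delete every temp file we created, no matter how the script exits
function clean_up {
    for TMP_FILE in "${TEMPFILES[@]}"
    do
        # Only try to remove files that actually exist
        [ -e "$TMP_FILE" ] && rm -f "$TMP_FILE"
    done
}

# Run clean_up on any exit, normal or otherwise
trap 'clean_up' EXIT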

The last thing that I do is to restore the PATH variable using the saved value in the OLDPATH variable. This undoes the change that we made at the beginning of the script which helped us run system commands more safely.

There are still improvements that can be made to this script based on what has been discussed in previous posts. Please add your ideas about the script in the comments on this post.

Tips and Tricks

  • Never copy commands or code from forums, blogs, and the like without checking them to make sure they’re safe. Resource #2 has a list of malicious commands that have been given out by problem users in the Ubuntu Forums. Your best defense is to review the commands/code thoroughly yourself, or find someone who can review it for you before you execute it. You can also post the code to other forums and ask the users there if it’s safe.
  • Always be suspicious of external inputs to your script whether they be variables, user input, or anything else. We talked about validating user input in the last post on error handling as well as this one. It’s important to remember that incorrect input is not always the doing of a cracker. Many times users make honest mistakes, and your script needs to be able to handle that eventuality.
  • Make sure that your script is writable only by the owner. That makes direct code injection attacks harder for a cracker to accomplish.
  • Use the cd command to change into a trusted directory when your script starts. This way you have a known starting point.
  • When you’re writing a script, always assume that it will be installed and run incorrectly. If it’s designed to be in a directory that’s only readable/writable by the owner, and it holds sensitive information, assume that it’s going to be placed in a world writable directory with full permissions for everyone. Don’t hard code an installation directory into your script unless you have to.
  • Don’t assume that your script is always going to be run as a regular user, or just as the super user. You need to understand what your script will do when run by unprivileged and privileged users.
  • Attempt to keep your scripts and files out of world writable directories like /tmp as much as possible.
  • Don’t give users access to programs with shell escapes (like vi and vim) from your scripts, especially when elevated privileges are involved.
  • Do not rely on only one security technique to protect your script and your users. Putting all your faith in a method like “security through obscurity” (such as password encryption) while ignoring all of the other security tools in your box is asking for trouble. Some security methods can give you a false sense of security, and you need to be vigilant. Remember, try to make the cracker's life as difficult as you possibly can. This involves a multi-tiered script security strategy.
  • Use secure versions of commands in your scripts whenever possible. For instance, use ssh and scp instead of telnet and rcp, or the slocate command rather than the locate command. The man page for the base command will sometimes point you toward the more secure versions.
  • Have other coders look over your script to check it for problems and security holes. You can even post your script to various forums and ask them to try to break it for you.
  • Make sure that any startup and configuration scripts that you add to your system are as secure and bug free as possible. Don’t add a script to the system’s init or Upstart mechanism without testing it thoroughly.
  • When using information like passwords within your script, try not to store the information within environment variables. Instead use pipes and redirection. The data will be harder to access by a cracker.
  • When creating and running scripts, follow the Rule Of Least Privilege by giving the script only the minimal set of privileges that it needs to do its job. Also, make sure that you've designed the script well so that it doesn't need elevated privileges unnecessarily. For instance, if a script works well with ownership of nobody and a permission string of 0700, don't set the script to be owned by root with permissions of 4777.
  • In the appropriate context, use options for commands that tend to enhance security and resistance to bad input. For instance, the find command has an option -print0 that causes the output to be null terminated instead of newline terminated. The xargs command has a similar option (-0). These options can help ensure that input containing things like newlines won't break your script (see the first sketch just after this list). This requires extra study of what can go wrong with your script, and of how to use the available commands to avoid it.
  • If you have scripts shared via something like a download repository, consider giving your users md5 and/or sha1 sum values so that they can check the integrity of a script they download (see the second sketch after this list). If you're emailing a script, you might want to use GPG so that you can do things like ensuring that the contents of the script have not been tampered with, and that a third party cannot read the contents of the script in transit.
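
Here is the first sketch referenced in the list above. It assumes a hypothetical data directory; because find separates the file names with NUL bytes and xargs splits on them, names containing spaces or newlines pass through the pipe intact:

# Long-list every file under the directory, safely handling
# names that contain spaces or newlines
find /home/jwright/mydata -type f -print0 | xargs -0 ls -l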
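
And the second sketch, using a hypothetical script named Both md5sum and sha1sum accept a -c option that checks files against previously generated sums:

# On the publishing side: generate sums to post with the script
$ md5sum > myscript.md5
$ sha1sum > myscript.sha1

# On the user's side: verify the download before running it
$ md5sum -c myscript.md5 OK
$ sha1sum -c myscript.sha1 OK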


These scripts are somewhat simplified, and in most cases the same tasks could be accomplished in other ways, but they work to illustrate the concepts. If you use these scripts, make sure you adapt them to your situation. Never run a script or command without understanding what it will do to your system.

This first script (Listing 29) is a compilation of the shell script code that I’ve demonstrated throughout this post. The code has been organized into functions and placed in a separate script that can be sourced to add security specific code to your own scripts. Keep in mind though that the functions in this script don’t give you comprehensive coverage. Once again, we’re barely scratching the surface.

Listing 29

#!/bin/bash -
# File:
# Script that you can source to add a few security specific
# functions to your own scripts.

#Variables to store the old values of the IFS and PATH variables
OLD_IFS=""
OLD_PATH=""

# Function to create "safe" temporary files.
function create_temp {
    # Give preference to user tmp directory for security
    if [ -e "$HOME/tmp" ]
    then
        TEMP_DIR="$HOME/tmp"
    else
        TEMP_DIR="/tmp"
    fi

    # Construct a "safe" temp file using mktemp
    TEMP_FILE=$(mktemp --tmpdir="$TEMP_DIR" XXXXXXXXXX)

    # Keep the file in an array to remove it later
    TEMPFILES+=( "$TEMP_FILE" )
}

# Function that will keep this script from being run with any kind
# of root privileges.
function drop_root {
    # Check to see if we're running as root via sudo
    if [ $(/usr/bin/id -ur) -eq 0 ];then
        echo "This script cannot be run with sudo"
        exit 1
    fi

    # Get the listing on this script
    INFO=$(ls -l $0)

    # Grab the permission at the SUID position
    PERM=$(echo "$INFO" | cut -d " " -f 1 | cut -c 4)

    # Grab the owner
    OWNER=$(echo "$INFO" | cut -d " " -f 3)

    # Check for the SUID bit and the owner of root
    if [ "$PERM" == "s" -a "$OWNER" == "root" ];then
        echo "This script cannot be run as SUID root"
        exit 1
    fi
}

# Function that will get and scrub user input to make it safer to use.
function scrub_input {
    # Grab the user's input
    read USR_INPUT

    # Remove all characters that aren't alphanumeric or newline
    USR_INPUT=$(echo "$USR_INPUT" | tr -cd '[:alnum:]\n')
}

# Function that sets certain environment variables to known values
function clear_vars {
    # Save the old variables so that they can be restored
    OLD_IFS="$IFS"
    OLD_PATH="$PATH"

    # Set the variables to known safer values
    IFS=$' \t\n'         #Set IFS to include whitespace characters
    PATH='/bin:/usr/bin' #Assumed safe paths
}

# Function that restores environment variables to what they were at the
# start of the script.
function restore_vars {
    IFS="$OLD_IFS"
    PATH="$OLD_PATH"
}

# Function that attempts to run a command safely via the whereis command.
function run_cmd {
    # Attempt to find the command with the whereis command
    CMD=$(whereis $1 | cut -d " " -f 2)

    # Check to make sure that the command was found
    if [ -f "$CMD" ];then
        eval "$CMD"
    else
        echo "The command $CMD was not found"
        exit 127
    fi
}

This script starts out with our new and improved function for creating relatively safe temp files (create_temp), taken directly from Listing 28, which we've already discussed. After that there's the drop_root function, which encapsulates the functionality from Listing 21. We can call this function at the beginning of a script to make sure that we're not being run with sudo and that the script is not SUID root. This function merely warns the user and exits; it does not give up its root privileges the way Bash itself does. The next function (scrub_input) reads input from the user and then removes everything but alphanumeric characters and the newline character. This is taken from Listing 19. The next two functions deal with environment variables. The first (clear_vars) saves the old values of both IFS and PATH, and then sets new values for each. The restore_vars function uses the saved values to reset the variables back to their original condition. This is the same concept that we talked about in Listing 3, just wrapped in functions. The last function (run_cmd) is similar to Listing 4, but I've expanded it a little to check whether a file with the name of the command exists before trying to run it. If the command exists, it is run via the eval command. If it does not, we warn the user and exit.

Listing 30 shows a simple script where I implement the collection of security specific functions in Listing 29.

Listing 30

#!/bin/bash -
# File:
# Script to test the sourceable script

# Function to clean up after ourselves
function clean_up {
    # Step through and delete all of the temp files
    for TMP_FILE in "${TEMPFILES[@]}"
    do
        # Make sure that the tempfile exists
        if [ -e "$TMP_FILE" ]; then
            echo "Temp file: $TMP_FILE"
            rm "$TMP_FILE"
        fi
    done

    # Reset the variables to their original values
    restore_vars
}

# Source the script that holds the security functions
. 

# Make sure that we delete the temp files when we exit
trap 'clean_up' EXIT

# Array to hold the temporary files
TEMPFILES=( )

# Variable to hold the user's input
USR_INPUT=""

# Make sure that we're not running with root privileges
drop_root

# Make sure that we have safe variables to work with
clear_vars

# Call the function that will create a temp file for us
create_temp

# Check to make sure that the temp file was created
echo "${TEMPFILES[0]}"

# Let the user know that input is expected
printf "Please enter your input: "

# Get and scrub the user input
scrub_input

# Test the user input
echo $USR_INPUT

# Try to safely run a command that exists
run_cmd ls > /dev/null

# Try to safely run a command that does not exist
run_cmd foo

At the very top of the script I create a clean_up function that handles the removal of any temporary files and calls the sourced function that restores the IFS and PATH variables to their original values. This function is used in the trap statement so that it will be called whenever the script exits. Just above the trap statement is where the security script is sourced, which gives us access to the security related functions. Continuing on down the script, you see that I've created a couple of variables to hold the temporary file names and the user input. The names of these variables match the ones that the sourced functions expect. The sourced function drop_root ensures that the script is not being run with root privileges, and then clear_vars is called to make sure that IFS and PATH are safer to use. After that I call the create_temp function to set up a temporary file for me, and then immediately echo the name/path of the file by accessing the first element of the TEMPFILES array (echo "${TEMPFILES[0]}").

I prompt the user for input with a printf statement next, but instead of putting the read command directly in my script I call the scrub_input function and let it handle the task of getting the input from the user. When I ran the script I tried inputting several symbols that should not be allowed in the user input, and upon hitting enter I saw via the echo $USR_INPUT statement that the symbols were properly scrubbed from the input. The last thing that I do is try to run two commands via the run_cmd function. The first time I use the function I run the ls command, which I would expect to succeed. I use the > /dev/null section of the line to suppress the output from the ls command so that the output of the script doesn't get too cluttered. The second command that I try to run with the run_cmd function is foo. I would not expect this command to be found, and have added it to show what the function does in that case. Listing 31 shows the output that I get when I run the script in Listing 30.

Listing 31

$ ./
/home/jwright/tmp/mEAPJhqgyb
Please enter your input: ?blog
blog
The command foo: was not found
Temp file: /home/jwright/tmp/mEAPJhqgyb

When I check the /home/jwright/tmp folder for the temporary file, I see that it was properly deleted by the script. I also see that the ls command was found, since there is no error, but the foo command was not. This is exactly what was expected. The example script in Listing 30 is not a real world script by any means, but it works to show you how you would use the sourced script, and what order you might want to call the sourced functions in. As always, I welcome any input on corrections, additions, and tweaks that you think should be added to these scripts or any scripts in this post. Tell me what you think in the comments section.


If you get any capital letters in the symbolic permission string for a file, it means that something is wrong. Usually if you get a capital “S” in the string, it means that you need to set execute rights for the owner or the group. A capital “T” means that you set the sticky bit without setting the execute permission for other/world on the file or directory.
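
You can see both cases with a couple of hypothetical files (the ls -l output here is trimmed down):

# SUID set (the 4), but no execute bit for the owner: capital S
$ touch demo.sh && chmod 4644 demo.sh
$ ls -l demo.sh
-rwSr--r-- 1 jwright jwright 0 ... demo.sh

# Add owner execute and the S becomes a lowercase s
$ chmod u+x demo.sh
$ ls -l demo.sh
-rwsr--r-- 1 jwright jwright 0 ... demo.sh

# Sticky bit (the 1) without world execute: capital T
$ mkdir demodir && chmod 1776 demodir
$ ls -ld demodir
drwxrwxrwT 2 jwright jwright 4096 ... demodir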


As I stated when we started this post, I haven't been able to cover every aspect of shell script security, and for the most part I've avoided the issue of system security, as that's an even larger (but related) subject. My hope is simply that I've given you a good starting point for plugging some of the common security holes in your own scripts. Using this as a starting point, have a look at the Resources section for more information, and make sure to take opportunities to continue your learning on script, program, and system security whenever they arise.




Resources

  1. Purdue University’s Center for Education and Research in Information Assurance and Security
  2. Ubuntu Forums Announcement A Few Malicious Commands To Avoid In Forums/Posts/Lists
  3. Article On The shc Utility That Encrypts Shell Scripts
  4. shc Utility Homepage
  5. Linux Journal Article On Security Concerns When Using shc
  6. 7 Tips On Script Security By James Turnbull (requires registration)
  7. TLDP Advanced Bash-Scripting Guide: Chapter 35 – Miscellany
  8. Mac OS X Article On Shell Script Security That Gives Examples Of Attacks
  9. Article From On SUID Shell Scripts
  10. Practical Unix & Internet Security – Chapter 5 – Section 5.5 (SUID)
  11. Practical Unix & Internet Security – Chapter 23 – Writing Secure SUID and Network Programs
  12. Help Net Security Article On Unix Shell Scripting Malware
  13. More SUID Vulnerability Information
  14. Short Article On Linuxtopia About The Dangers Of Running Untrusted Shell Scripts
  15. Article On Useful Shell Utilities For Scripts
  16. Examples Of Risky Scripts To Use With sudo
  17. Very Good Article On Creating Safe Temporary Files
  18. IBM developerWorks Article: Secure programmer: Developing secure programs
  19. IBM developerWorks Article: Secure programmer: Validating input
  20. IBM developerWorks Article: Secure programmer: Keep an eye on inputs
  21. Article On SUID, SGID, And Sticky Bits In Linux And Unix