Chapter 26: Advanced Shell Scripting: Automating System Administration Tasks
Chapter Objectives
By the end of this chapter, you will be able to:
- Understand the role of automation in maintaining the stability and reliability of embedded Linux systems.
- Implement robust shell scripts using advanced techniques like functions, signal handling, and error checking.
- Develop practical scripts for critical system administration tasks, including automated backups and resource monitoring.
- Configure the `cron` daemon to schedule and execute scripts at predefined intervals.
- Debug common shell scripting errors related to permissions, paths, and command syntax.
- Apply best practices for writing secure, efficient, and maintainable shell scripts for embedded environments.
Introduction
In embedded Linux, systems are often deployed in remote or inaccessible locations where direct human intervention is impractical or impossible. Consider a weather monitoring station on a remote mountaintop or a fleet of industrial sensors deployed across a factory floor. These devices must operate autonomously, reliably, and predictably for months or even years. This is where the power of automation through shell scripting becomes not just a convenience, but a fundamental requirement for robust system administration. While previous chapters introduced the basics of the Linux shell, this chapter elevates that knowledge into a professional skill set, focusing on creating intelligent scripts that act as tireless digital administrators for your Raspberry Pi 5.
We will move beyond simple command sequences and explore the art of crafting scripts that can make decisions, handle unexpected errors, and perform complex tasks without supervision. You will learn to write scripts that safeguard critical data by performing automated backups, keep a watchful eye on system health by monitoring resources like CPU and memory, and execute routine maintenance. The core tool for this automation is the shell, and the language is its scripting syntax, a powerful and universally available resource on any Linux system. By mastering the techniques in this chapter, you will transition from being a user who simply commands the system to an architect who can program its behavior, ensuring your embedded devices are not just functional, but truly resilient.
Technical Background
The Philosophy of Automation in Embedded Systems
At its heart, automation in embedded systems is driven by the need for unattended reliability. An embedded device, unlike a desktop computer, is expected to recover from common issues and maintain its operational state without a user sitting in front of it. A well-crafted shell script can be thought of as a pre-programmed set of instructions for the system to follow when certain conditions are met or at specific times. This programmatic approach to system management is essential for tasks that are repetitive, prone to human error, or need to be performed at times when an administrator is not available.
*Figure: Mind map of embedded-systems automation — the need (unattended reliability in remote locations, long-term autonomous operation, autonomous recovery), the mechanism (shell scripting to orchestrate commands, manipulate text, and control processes), the core tasks (repetitive maintenance, data backups, resource monitoring), and the outcome (stable, predictable, resilient systems with reduced human intervention).*
The shell, often Bash (Bourne-Again Shell) on most Linux distributions including Raspberry Pi OS, provides a rich set of features for this purpose. It is more than just a command interpreter; it is a complete programming environment. The beauty of shell scripting lies in its simplicity and power. It allows you to orchestrate the execution of other programs, manipulate text streams, and control system processes using the very same tools you use on the command line. This creates a seamless transition from interactive administration to automated administration.
Building Blocks of an Advanced Script
A simple script executes a linear sequence of commands. An advanced script, however, anticipates complexity. It is structured, resilient, and communicative. Let’s explore the foundational concepts that enable this leap in sophistication.
Functions: Creating Reusable Code Blocks
As scripts grow in complexity, you will often find yourself repeating the same set of commands. Functions allow you to encapsulate these repeating blocks of code under a single name, promoting modularity and readability. A function can be called multiple times throughout a script, simplifying maintenance. If you need to change the logic, you only need to update it in one place: within the function definition.
*Figure: Script structure with a `log_message()` function defined once and called at several points in the main logic (at the start, after each action, and at the end), so the message-formatting logic lives in a single place.*
A function in Bash is defined using the following syntax:
function_name() {
# Commands to be executed
# ...
return 0 # Optional return status
}
For instance, a function to log messages with a timestamp is a common utility in administration scripts. This centralizes the format of log messages and makes the main body of the script cleaner. The script can then simply call `log_message "Starting backup process."` instead of repeatedly writing out the `echo` and `date` commands.
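A minimal sketch of such a helper — the log file path here is an illustrative assumption, not a fixed convention:

```shell
# Minimal logging helper; the LOG_FILE path is an illustrative assumption.
LOG_FILE="/tmp/demo_backup.log"

log_message() {
    local message="$1"
    # 'tee -a' prints the line to the console and appends it to the log file.
    echo "$(date '+%Y-%m-%d %H:%M:%S') - ${message}" | tee -a "${LOG_FILE}"
}

log_message "Starting backup process."
```

Every call produces one timestamped line, such as `2025-07-08 03:00:01 - Starting backup process.`, on both the terminal and in the log file.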
Signal Handling: Graceful Exits and Cleanup
What happens if a user presses Ctrl+C while your script is in the middle of writing to a critical file? Without proper handling, this interruption (which sends an `INT` signal) could leave the system in an inconsistent state with a partially written file. The `trap` command is the shell's mechanism for catching these signals and executing a specific piece of code before the script terminates.
The syntax is `trap 'command_to_run' SIGNAL_NAME`. For example:
trap 'cleanup; exit 1' INT TERM QUIT
This command tells the shell to execute a function named `cleanup` and then exit if it receives the `INT` (interrupt), `TERM` (terminate), or `QUIT` signal. The `cleanup` function might remove temporary files, close network connections, or restore a configuration file to its original state, ensuring the script doesn't leave a mess behind. This is crucial for scripts that modify the system state.
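As a concrete sketch, the following writes and runs a small child script whose temporary file is removed however the script ends; the file names are illustrative assumptions:

```shell
#!/bin/bash
# Write a child script that uses 'trap' to clean up its temp file, then run it.
# The paths /tmp/trap_demo.sh and /tmp/trap_demo.tmp are illustrative.
cat > /tmp/trap_demo.sh <<'EOF'
#!/bin/bash
TMP_FILE="/tmp/trap_demo.tmp"
cleanup() { rm -f "${TMP_FILE}"; }
trap cleanup EXIT            # runs on any exit, normal or error
trap 'exit 130' INT TERM     # route Ctrl+C / kill through the EXIT trap
echo "working..." > "${TMP_FILE}"
# ... main logic would run here ...
EOF
chmod +x /tmp/trap_demo.sh
/tmp/trap_demo.sh
```

After the child script exits, `/tmp/trap_demo.tmp` is gone even though nothing in its main flow removed it — the `EXIT` trap did.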
*Figure: Signal-handling flow — with `trap`, an incoming signal (e.g., Ctrl+C/SIGINT) is intercepted, a cleanup function runs (e.g., removing temp files), and the script exits gracefully; without `trap`, the script terminates immediately, potentially leaving leftover files or an inconsistent state.*
Robust Error Checking: The `set` Command
By default, a shell script will continue to execute commands even after one of them fails. In an automated script, this is dangerous. If a command to mount a backup drive fails, you certainly do not want the script to proceed with the `rsync` command, which might then erroneously write data to the root filesystem if the mount point directory exists.
*Figure: Pipeline failure behavior — by default, a failed `command1` in `command1 | command2` goes unnoticed, the script continues, and a later dangerous command (e.g., `rsync --delete`) still runs, risking data loss; with `set -euo pipefail`, the script exits immediately when the failure occurs, leaving the system in a safe state.*
The `set` built-in command provides several options to make scripts safer. The three most important are:
- `set -e` (or `set -o errexit`): Ensures that the script will exit immediately if any command fails (returns a non-zero exit status). This "fail-fast" approach prevents subsequent commands from running in an unexpected state.
- `set -u` (or `set -o nounset`): Treats the use of an uninitialized variable as an error and causes the script to exit. This helps catch typos in variable names that might otherwise go unnoticed, leading to unpredictable behavior.
- `set -o pipefail`: A subtle but critical option. In a pipeline of commands (e.g., `command1 | command2`), the exit status of the pipeline is typically the exit status of the last command. If `command1` fails but `command2` succeeds, the pipeline is considered successful. `set -o pipefail` changes this behavior, making the pipeline's exit status that of the rightmost command to exit with a non-zero status, or zero if all commands succeed. This ensures that failures anywhere in the pipeline are detected.
Placing `set -euo pipefail` at the beginning of your scripts is a widely accepted best practice for writing robust, predictable automation.
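A quick way to see the effect: this sketch writes a throwaway script (path is illustrative) in which a pipeline fails partway, and shows that the line after the pipeline never runs:

```shell
#!/bin/bash
# Demonstration: with 'set -euo pipefail', a failure inside a pipeline aborts
# the script before any later (possibly destructive) command runs.
cat > /tmp/pipefail_demo.sh <<'EOF'
#!/bin/bash
set -euo pipefail
false | true            # pipeline "succeeds" without pipefail; fails with it
echo "this line is never reached"
EOF
chmod +x /tmp/pipefail_demo.sh
if /tmp/pipefail_demo.sh > /tmp/pipefail_demo.out 2>&1; then
    status=0
else
    status=$?
fi
echo "child exit status: ${status}"
```

With the `set -euo pipefail` line removed from the child script, it would instead exit with status 0 and print the final message — exactly the silent failure mode the option is designed to prevent.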
The Scheduler: `cron`
A script is only as useful as its execution context. For automation, we often need scripts to run at specific times or regular intervals. The `cron` daemon is the standard Linux utility for time-based job scheduling. It runs in the background, waking up every minute to check a set of configuration files (called "crontabs") for jobs to execute.
Each user has their own crontab, which can be edited by running `crontab -e`. The syntax of a crontab entry consists of five time-and-date fields, followed by the command to be executed.
*Figure: Crontab entry structure — minute, hour, day of month, month, day of week, then the command.*
The five fields, in order, are: minute (0–59), hour (0–23), day of month (1–31), month (1–12 or JAN–DEC), and day of week (0–6 or SUN–SAT, where both 0 and 7 mean Sunday). The command should be an absolute path to the script or program (e.g., `/usr/local/bin/backup.sh`).
For example, to run a backup script located at `/usr/local/bin/backup.sh` every day at 2:30 AM, the crontab entry would be:
30 2 * * * /usr/local/bin/backup.sh
A common pitfall when using `cron` is its minimal environment. Scripts run via `cron` do not inherit the `$PATH` and other environment variables from your interactive shell session. Therefore, it is imperative to use absolute paths for all commands and files within a script intended for `cron` execution. For example, instead of just `rsync`, you should use `/usr/bin/rsync`.
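A defensive alternative to writing every command with an absolute path is to set `PATH` explicitly at the top of any script `cron` will run. This sketch uses `env -i` to simulate cron's near-empty environment; the file path is an illustrative assumption:

```shell
#!/bin/bash
# A cron-safe script sets its own PATH instead of assuming the interactive one.
# /tmp/cron_safe_demo.sh is an illustrative path.
cat > /tmp/cron_safe_demo.sh <<'EOF'
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH
date '+%Y-%m-%d'
EOF
chmod +x /tmp/cron_safe_demo.sh
# 'env -i' starts the child with an empty environment, much like cron does.
env -i /bin/bash /tmp/cron_safe_demo.sh
```

Without the `PATH` assignment, a bare `date` (or `rsync`, or any other tool) could fail under cron even though it works perfectly in your interactive shell.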
Essential Tools for Administration Scripts
Shell scripts often act as “glue” that orchestrates more powerful, specialized command-line tools. For system administration, a few tools are indispensable.
`rsync`: The Powerhouse of Synchronization
`rsync` (remote sync) is a highly versatile utility for copying and synchronizing files and directories. Its true power lies in its efficiency. When used to update a backup, `rsync` employs a delta-transfer algorithm: it sends only the differences between the source and destination files, not the entire file when it has changed only slightly. This is incredibly efficient, especially over slow network connections or for large files.
Key features for backup scripts include:
- Archiving (`-a`): A convenient shorthand for a set of options that preserves permissions, ownership, timestamps, and symbolic links, which is essential for a true backup.
- Deletion (`--delete`): Deletes files from the destination if they are no longer present in the source, keeping the backup an exact mirror.
- Exclusion (`--exclude`): Allows you to specify files or directories to omit from the backup, such as temporary files or caches.
`awk`, `sed`, and `grep`: The Text Processing Trio
Embedded systems generate a vast amount of text-based data in the form of log files and command outputs. Extracting meaningful information from this text is a core task of monitoring scripts.
*Figure: A typical text-processing pipeline — raw output (e.g., from `df -h`) is piped through `grep` to filter lines by pattern (e.g., `/dev/root`), then `awk` to extract a column (e.g., `{print $5}` for the usage percentage), then `sed` to strip the `%` sign (`s/%//`), yielding clean data such as `90`.*
- `grep` is used for searching and filtering text based on patterns. For example, `grep 'ERROR' /var/log/syslog` will display only the lines containing the word "ERROR".
- `sed` (stream editor) is used for performing text transformations on an input stream. It can perform search-and-replace operations, deletions, and insertions.
- `awk` is a more advanced text-processing utility that is essentially a programming language in its own right. It processes text line by line and can perform calculations, reformat output, and generate reports. For example, `df -h | awk '/\/dev\/root/ {print $5}'` can be used to extract the percentage of disk space used for the root filesystem. `awk` is particularly powerful for parsing columnar data.
Understanding how to pipe the output of one command into these utilities is a fundamental skill. A typical monitoring task might involve getting raw data from a command like `vmstat`, filtering it with `grep`, and then using `awk` to extract and format the specific values needed.
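The whole trio can be tried on canned `df -h` output, so the pipeline behaves identically on any machine (the sample sizes below are made up for illustration):

```shell
#!/bin/bash
# Worked example of the grep | awk | sed pipeline on sample 'df -h' output.
sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G   26G  1.5G  95% /'

usage=$(printf '%s\n' "${sample}" \
    | grep '/dev/root' \
    | awk '{print $5}' \
    | sed 's/%//')

echo "${usage}"   # → 95, a bare number ready for a numeric comparison
```

The same pattern — filter the line, pick the column, strip the unit — underlies the monitoring script later in this chapter.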
Practical Examples
This section provides complete, hands-on examples of scripts for common administration tasks on a Raspberry Pi 5. Each example includes the script itself, setup instructions, and an explanation of its operation.
Tip: Before you begin, create a directory for your scripts. A good practice is to use `/usr/local/bin` for system-wide scripts or `~/bin` for user-specific ones. For these examples, let's use a local directory: `mkdir -p ~/admin-scripts && cd ~/admin-scripts`.
Example 1: Automated Backup Script
This script will perform a daily backup of a user’s home directory to an external USB drive. It will be robust, logging its actions and handling potential errors gracefully.
Hardware Setup
1. External Drive: You will need a USB flash drive or external hard drive.
2. Formatting: Format the drive with a Linux filesystem like `ext4`. Plug it into your computer and use a tool like `gparted` or the command line (`sudo mkfs.ext4 /dev/sdX1`, replacing `sdX1` with your drive's partition).
3. Mount Point: Create a directory that will serve as the mount point for the drive on your Raspberry Pi 5.
sudo mkdir /mnt/backup
4. Auto-Mounting: To ensure the drive is available after a reboot, configure it to mount automatically. First, find the `UUID` of your drive's partition with `ls -l /dev/disk/by-uuid/`. Then, add a line to `/etc/fstab`:
# Example line for /etc/fstab
UUID=YOUR_DRIVE_UUID /mnt/backup ext4 defaults,nofail 0 2
The `nofail` option is important; it prevents the system from halting the boot process if the drive is not connected.
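Because `nofail` allows the system to boot with the drive absent, a backup script should confirm the drive is actually mounted before writing into the mount point. A portable sketch using `/proc/mounts` — the `is_mounted` helper name is our own:

```shell
#!/bin/bash
# Succeeds only if the given directory appears as a mount point in /proc/mounts
# (fields are space-separated: device, mount point, fstype, options, ...).
is_mounted() {
    grep -qs " $1 " /proc/mounts
}

# Example: the root filesystem is always mounted.
if is_mounted /; then
    echo "mounted"
fi

# In a backup script you would check the backup target instead:
#   is_mounted /mnt/backup || { echo "Backup drive not mounted" >&2; exit 1; }
```

The `mountpoint -q /mnt/backup` command from util-linux performs the same check; the `/proc/mounts` version merely avoids depending on an extra binary.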
The Backup Script (`backup.sh`)
Create the file `~/admin-scripts/backup.sh` with the following content.
#!/bin/bash
# =============================================================================
#
# backup.sh: A robust script to back up a source directory to a destination
# using rsync. Includes logging, error handling, and lock file
# to prevent concurrent runs.
#
# =============================================================================
# --- Configuration ---
# Use 'readlink -f' to get the absolute path, which is safer for scripts.
# This makes the script runnable from any directory.
readonly SCRIPT_DIR="$(readlink -f "$(dirname "$0")")"
readonly SCRIPT_NAME="$(basename "$0")"
# Source directory to back up.
# Using ${HOME} is generally safe, but an absolute path is more explicit.
readonly SOURCE_DIR="/home/pi"
# Destination directory for the backup.
readonly DEST_DIR="/mnt/backup/pi_home_backup"
# Log file for recording script activity.
readonly LOG_FILE="${SCRIPT_DIR}/backup.log"
# Lock file to prevent the script from running more than once at a time.
readonly LOCK_FILE="/tmp/${SCRIPT_NAME}.lock"
# --- Script Body ---
# Exit immediately if a command exits with a non-zero status.
set -o errexit
# Treat unset variables as an error when substituting.
set -o nounset
# Pipelines return the exit status of the last command to fail.
set -o pipefail
# --- Functions ---
# Function to log messages to the console and a log file.
log_message() {
local message="$1"
echo "$(date '+%Y-%m-%d %H:%M:%S') - ${message}" | tee -a "${LOG_FILE}"
}
# Function to perform cleanup tasks on script exit.
# This is called by the 'trap' command.
cleanup() {
log_message "Cleanup: Removing lock file."
rm -f "${LOCK_FILE}"
# Any other cleanup tasks can go here.
}
# --- Main Logic ---
log_message "--- Backup Script Started ---"
# Check for a lock file. If it exists, another instance is running.
if [ -e "${LOCK_FILE}" ]; then
    log_message "Error: Lock file found at ${LOCK_FILE}. Another instance may be running."
    exit 1
fi
# Create the lock file, then set the trap. Setting the trap only after the
# lock is acquired ensures that exiting because of an existing lock does not
# delete the lock file belonging to the other running instance.
touch "${LOCK_FILE}"
log_message "Lock file created."
trap cleanup EXIT INT TERM
# Check if the backup destination directory is mounted and writable.
# The '-d' checks if it's a directory, '-w' checks if it's writable.
if [ ! -d "${DEST_DIR}" ] || [ ! -w "${DEST_DIR}" ]; then
log_message "Error: Backup destination ${DEST_DIR} is not a writable directory."
log_message "Check if the external drive is mounted correctly."
exit 1
fi
log_message "Source directory: ${SOURCE_DIR}"
log_message "Destination directory: ${DEST_DIR}"
log_message "Starting rsync..."
# The rsync command.
# -a: archive mode (preserves permissions, ownership, etc.)
# -v: verbose (for logging purposes)
# --delete: deletes files in the destination that are not in the source
# --exclude: ignores specified files/directories
# The output of rsync is redirected to the log file.
/usr/bin/rsync -av --delete \
--exclude ".cache" \
--exclude "Downloads" \
"${SOURCE_DIR}/" "${DEST_DIR}/" >> "${LOG_FILE}" 2>&1
# 2>&1 redirects stderr to stdout, so both are captured in the log.
log_message "Rsync completed successfully."
log_message "--- Backup Script Finished ---"
# The trap will automatically call cleanup() upon normal exit.
exit 0
Making the Script Executable and Scheduling It
1. Set Permissions:
chmod +x ~/admin-scripts/backup.sh
2. Perform a Manual Run: It’s crucial to test the script manually before automating it.
~/admin-scripts/backup.sh
Check the output on the console and inspect the log file (`~/admin-scripts/backup.log`) and the backup directory (`/mnt/backup/pi_home_backup`) to ensure it worked as expected.
3. Schedule with `cron`: Open the crontab for editing:
crontab -e
Add the following line to the file to run the script every day at 3:00 AM.
# Run the home directory backup script daily at 3:00 AM
0 3 * * * /home/pi/admin-scripts/backup.sh
Save and exit the editor. `cron` will automatically load the new schedule.
Example 2: System Resource Monitoring Script
This script will monitor CPU temperature, memory usage, and disk space. If any of these metrics cross a predefined threshold, it will send a notification (in this case, by writing to a dedicated alert log).
The Monitoring Script (`monitor.sh`)
Create the file `~/admin-scripts/monitor.sh`.
#!/bin/bash
# =============================================================================
#
# monitor.sh: Monitors key system resources (CPU temp, memory, disk)
# and logs an alert if thresholds are exceeded.
#
# =============================================================================
# --- Configuration ---
readonly SCRIPT_DIR="$(readlink -f "$(dirname "$0")")"
# Thresholds
readonly MAX_CPU_TEMP=75 # Degrees Celsius
readonly MAX_MEM_USAGE=85 # Percentage
readonly MAX_DISK_USAGE=90 # Percentage
# Log file for alerts.
readonly ALERT_LOG="${SCRIPT_DIR}/alerts.log"
# --- Functions ---
log_alert() {
local message="$1"
echo "$(date '+%Y-%m-%d %H:%M:%S') - ALERT: ${message}" >> "${ALERT_LOG}"
}
# --- Main Logic ---
set -o nounset
set -o pipefail
# 1. Check CPU Temperature
# The temperature in /sys/class/thermal/thermal_zone0/temp is reported in
# millidegrees Celsius, so divide by 1000. Integer shell arithmetic with
# $(( )) is sufficient here; no external calculator such as 'bc' is needed.
current_cpu_temp=$(cat /sys/class/thermal/thermal_zone0/temp)
current_cpu_temp_c=$((current_cpu_temp / 1000))
if [ "${current_cpu_temp_c}" -gt "${MAX_CPU_TEMP}" ]; then
log_alert "CPU temperature is high: ${current_cpu_temp_c}°C (Threshold: ${MAX_CPU_TEMP}°C)"
fi
# 2. Check Memory Usage
# Use 'free' and 'awk' to get the percentage of used memory.
# The awk script calculates (used / total) * 100.
current_mem_usage=$(free | awk '/Mem/ {printf "%.0f", $3/$2 * 100.0}')
if [ "${current_mem_usage}" -gt "${MAX_MEM_USAGE}" ]; then
log_alert "Memory usage is high: ${current_mem_usage}% (Threshold: ${MAX_MEM_USAGE}%)"
fi
# 3. Check Disk Usage for the root filesystem
# Use 'df' and 'awk' to get the usage percentage for the root partition.
current_disk_usage=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "${current_disk_usage}" -gt "${MAX_DISK_USAGE}" ]; then
log_alert "Root disk usage is high: ${current_disk_usage}% (Threshold: ${MAX_DISK_USAGE}%)"
fi
exit 0
*Figure: `monitor.sh` control flow — check CPU temperature (alert if above 75°C), then memory usage (alert if above 85%), then disk usage (alert if above 90%), logging an alert for each breached threshold before the script ends.*
Making the Script Executable and Scheduling It
1. Set Permissions:
chmod +x ~/admin-scripts/monitor.sh
2. Manual Run: Test the script to see its immediate output (it will only create the log if a threshold is breached).
~/admin-scripts/monitor.sh
To test the alert mechanism, you can temporarily set a threshold to a very low value (e.g., `MAX_CPU_TEMP=20`) and run it again. Remember to change it back.
3. Schedule with `cron`: Open the crontab again (`crontab -e`) and add a line to run this script every 10 minutes.
# Run the system resource monitor every 10 minutes
*/10 * * * * /home/pi/admin-scripts/monitor.sh
This frequent execution ensures that you are alerted to potential problems in a timely manner.
Common Mistakes & Troubleshooting
Even experienced developers make mistakes when writing shell scripts. Here are some common pitfalls and how to avoid them:
- Forgetting to make a script executable: if running it reports "Permission denied", apply `chmod +x` to the file.
- Relying on the interactive environment in `cron` jobs: use absolute paths for commands and files, since `cron` provides only a minimal `$PATH`.
- Leaving variables unquoted: an unquoted `$VAR` containing spaces splits into multiple words; always write `"${VAR}"`.
- Ignoring failures mid-script: without `set -euo pipefail`, a failed command is silently skipped and later commands run in an unexpected state.
Exercises
- Enhance the Backup Script: Modify the `backup.sh` script to create versioned backups. Instead of overwriting the same directory, the script should create a new directory named with the current date (e.g., `pi_home_backup_2025-07-08`). Also, add a feature to automatically delete backups older than 30 days to save space.
- Create a Network Monitoring Script: Write a new script called `net_monitor.sh` that checks if the Raspberry Pi is connected to the internet. It should try to `ping` a reliable server (like `8.8.8.8`). If the ping fails three times in a row, it should log an alert to the `alerts.log` file. Schedule this script to run every 5 minutes.
- Log File Rotation: The `backup.log` and `alerts.log` files will grow indefinitely. Write a script called `log_rotator.sh` that renames `backup.log` to `backup.log.1` and `alerts.log` to `alerts.log.1`. If `backup.log.1` already exists, it should be deleted before the rename. This is a simplified version of what tools like `logrotate` do. Schedule this to run once a week.
- Interactive Cleanup Script: Create a script called `system_cleaner.sh` that finds and lists all files in the `/tmp` directory and the APT cache (`/var/cache/apt/archives`) that are larger than 10MB. It should then prompt the user interactively, asking for confirmation (`y/n`) before deleting each file. (Hint: Use `find` for locating files and `read` for user input within a `while` loop.)
Summary
- Automation is Key: Shell scripting is a fundamental skill for managing embedded Linux systems, enabling unattended and reliable operation.
- Robust Scripts are Structured: Advanced scripts use functions for modularity, `trap` for graceful cleanup, and `set -euo pipefail` for robust error handling.
- `cron` is the Workhorse of Scheduling: The `cron` daemon is the standard tool for executing scripts at scheduled times, but requires careful handling of environment variables and paths.
- Leverage Standard Tools: Scripts act as orchestrators for powerful command-line utilities like `rsync`, `grep`, and `awk`, which do the heavy lifting of file synchronization and text processing.
- Best Practices Prevent Disaster: Always quoting variables, using absolute paths in `cron` jobs, and ensuring correct file permissions are critical for writing reliable automation.
- Monitoring Provides Insight: Regularly checking system health metrics like CPU temperature, memory, and disk usage can help you proactively identify and address issues before they become critical failures.
Further Reading
- Bash Guide for Beginners: An excellent, classic guide that covers the fundamentals of shell scripting.
- Advanced Bash-Scripting Guide: A comprehensive and deep dive into more complex scripting topics.
- `rsync` Official Documentation: The definitive source for all `rsync` options and capabilities.
- The `cron` and `crontab` man pages: The most authoritative reference, available on your system via `man 5 crontab`. An online version is also useful.
- Google's Shell Style Guide: An excellent resource for professional best practices on writing clean, readable, and maintainable shell scripts.
- Greg’s Wiki – BashFAQ: A curated list of frequently asked questions and common problems encountered in Bash scripting.