
Time Limit Using IPTABLES

Many people might not have gauged the power of IPTABLES!

One of its nice features is that you can restrict the hours during which outbound traffic is allowed to pass through the firewall. (Say you have a Windows network, this Linux machine is the gateway, and you want to limit the hours during which users can access the internet, using iptables.)
You will realise that most software firewalls like Untangle or Smoothwall don't provide time-based access control in their open source versions. So fear not, here's the way to go!

To limit the times at which internet access is allowed (you can change the time parameters in the rule as per your requirement):

iptables -I FORWARD 7 -s <SOURCEIP> -p tcp -m multiport --dports http,https -o eth0 -i eth1 -m time --timestart 16:00 --timestop 18:00 --weekdays Mon,Tue,Wed,Thu,Fri,Sat,Sun -j ACCEPT
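To confirm the rule landed where you expect (inserting at position 7 assumes the FORWARD chain already holds at least six rules, which may not be true on your box), you can list the chain with rule numbers; these are standard iptables options:

iptables -L FORWARD -n -v --line-numbers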

Limit Bandwidth Using IPTABLES

Many people might not have gauged the power of IPTABLES!

One of its nice features is that you can cap the volume of outbound traffic going through the firewall. (Say you have a Windows network, this Linux machine is the gateway, and you want to limit the usage quota of a particular machine using iptables.)
You will realise that most software firewalls like Untangle or Smoothwall don't provide bandwidth quota control in their open source versions. So fear not, here's the way to go!

To set a bandwidth quota (the example below allows a maximum of 2GB of usage):

iptables -I FORWARD 5 -s <SOURCEIP> -p tcp -m quota --quota 2147483648 -j ACCEPT
iptables -I FORWARD 6 -s <SOURCEIP> -j DROP
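Since the quota match stops matching once the byte budget is used up (so traffic falls through to the DROP rule), it is worth knowing how to watch the counters and reset the quota. A minimal sketch, assuming the rule positions 5 and 6 used above:

# Show per-rule packet and byte counters for the FORWARD chain
iptables -L FORWARD -n -v --line-numbers

# To reset the quota, delete the ACCEPT rule and insert it afresh
iptables -D FORWARD 5
iptables -I FORWARD 5 -s <SOURCEIP> -p tcp -m quota --quota 2147483648 -j ACCEPT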

Load Average Explained

System administrators and users working with Unix / Linux servers have surely noticed the "Load Average" figure, but many never put much thought into how this number is generated. So let us discuss this parameter in detail.
Load average is an overall measure of demand on the system: on Linux it is the average number of processes that are either running or waiting in the run queue (including processes blocked waiting for I/O), taken over the last 1, 5 and 15 minutes.
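The three load-average figures can also be read directly without running top. A minimal check, assuming a Linux system with /proc mounted (the first three fields are the load averages, followed by run-queue counts and the last PID):

# cat /proc/loadavg

The uptime command prints the same three figures at the end of its output:

# uptime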

One of the ways to view the Load Average is running the "top" command. Using "top" provides insight into the system’s general health status. The top command provides a view of the following main health statistics:

    Uptime (in days, hours and minutes), along with the current time, the logged-in user count and the load average
    The total number of processes along with the number of running processes and sleeping processes
    Memory usage including total memory, used and free memory
    Swap memory usage (useful for troubleshooting slow systems)

A snapshot of the top display can be captured directly from the command line.
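For instance (a minimal illustrative invocation; the exact values shown will of course differ from system to system), top can be run in batch mode to print a single snapshot and exit:

# top -b -n 1 | head -n 15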

The topmost process in the list is the one using the highest percentage of CPU. The top command is available on most Unix and Linux variants.


As we'll see, CPU usage is not directly related to load average; load average is an overall view of the system. The load average is generally high for one (or both) of the following reasons:

1. The CPU itself is busy or overloaded with processing
2. Processes in the run queue (typically called blocking processes) are waiting for I/O

If the sum of the first two figures, %us and %sy, is approaching 90%, the CPU is overloaded and may need to be upgraded. If the %wa figure on the same line is high, there are jobs in the run queue waiting for I/O (perhaps trying to read data from a mounted disk), and that is where to look.
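If you prefer to watch these figures change over time rather than reading a single top screen, vmstat (shipped with the procps package on most Linux distributions, so availability is an assumption) prints the us, sy, id and wa CPU columns at a fixed interval; the example below samples every second, five times:

# vmstat 1 5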

To diagnose which process is causing this, run the command below and look for 'D' in the STAT column (the eighth one). You will usually see plenty of R and S entries as well:
# ps faux

The explanation of the symbols D, R & S is given below:

D ---> Uninterruptible sleep, usually waiting on disk or network I/O
R ---> Running, or runnable and waiting in the run queue for CPU
S ---> Sleeping (interruptible sleep, waiting for an event to complete)

You can also use the command below to find processes with state D:

# ps axo stat,pid | grep D
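A slightly more precise variant (a hedged sketch built from standard ps format keywords) anchors the match to the state column and also shows the kernel wait channel each process is blocked in:

# ps axo stat,pid,comm,wchan | awk '$1 ~ /^D/'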

One quick rule of thumb I use to make sure systems do not see latency (slow processes, slow page loads, slow queries and so on) is to keep the load average, which represents the number of processes that had to wait for resources over the last 1, 5 and 15 minutes, under the total number of processors in the machine.

To check the number of processors (recognized by the Unix/Linux OS) run the following command:

# cat /proc/cpuinfo | grep "processor" | wc -l

Keep in mind this command returns the total number of recognized processors. If you have a hyper-threaded Pentium IV you'll see two processors when really you only have one core; the same caveat applies when applying the load average rule of thumb.
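Putting the rule of thumb into script form, here is a minimal sketch; it assumes a Linux box with /proc/loadavg and the coreutils nproc command, and treats one process per processor as a soft threshold rather than a hard limit:

#!/bin/bash
# Warn if the 1-minute load average exceeds the number of processors
cpus=$(nproc)                             # processors recognised by the OS
load1=$(cut -d ' ' -f1 /proc/loadavg)     # 1-minute load average
# Load averages are floating point, so compare with awk rather than bash arithmetic
if awk -v l="$load1" -v c="$cpus" 'BEGIN { exit !(l > c) }'; then
  echo "WARNING: load average $load1 exceeds processor count $cpus"
else
  echo "OK: load average $load1 is within processor count $cpus"
fi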

Remember, keeping the load average under the total processor count will make for a healthy and fast-responding system.

Show All Cronjobs For All Users


To list all cronjobs for all users on a server you can run the following one-liner command in Linux:

for user in $(cut -f1 -d: /etc/passwd); do echo "#### CRONJOBS FOR $user ####:";crontab -u $user -l;  done


Alternatively, you can put the following content into a shell script and execute it:

#!/bin/bash
for user in $(cut -f1 -d: /etc/passwd); do echo "#### CRONJOBS FOR $user ####:";crontab -u $user -l;  done
exit 0


Note: Don't forget to check the system cron files like cron.daily / cron.hourly and so on, because this script doesn't take them into account.
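A quick way to review those system-level entries as well (the paths below are the usual Debian / Red Hat locations and may vary by distribution):

cat /etc/crontab
ls -l /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly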

Extract a Table from Mysqldump File


Sometimes you may need to extract a single table from a mysqldump file because the database dump is too large to open in any text editor. In such a case you can use the "awk" command in Linux to extract the required table from the dump. You can apply the same idea to other file manipulations of a similar kind.

First, you have to know where in your mysqldump output you want to begin your extraction, and where you want to end it. The key here is finding something unique at the beginning and ending of the block that won’t be found anywhere else.

A sample mysqldump contains something like the following:

--
-- Table structure for table `table1`
--
...
DROP TABLE IF EXISTS `table1`;
CREATE TABLE `table1` ( ...
LOCK TABLES `table1` WRITE;
INSERT INTO `table1` VALUES (1,0,'2 ...
UNLOCK TABLES;
...
--
-- Table structure for table `table2`
--

As you can see, we have a line with the comment "Table structure for table `table1`", then all of the dropping, creating, and inserting for the table, and then another comment for the next table. These two lines are perfect for grabbing all of the operations pertinent to our one table.

To extract the dump for a single table from an entire database dump, run the following from a command prompt:

#     awk '/Table structure for table `table1`/,/Table structure for table `table2`/{print}' databasedump.sql > extracted_table.sql

The above command searches through the dump file, and as soon as it matches a line containing the first search string (delimited by the first pair of slashes), it prints that line and every subsequent line until it encounters a line containing the second search string (delimited by the second pair of slashes). The backticks around the table names are matched literally, so only the exact table names mark the start and end of the block.
Now the extracted_table.sql file contains the SQL to restore your table.
Finally, there are usually various parameters at the top of your mysqldump file that you may need to set before restoring your table, depending on the complexity of your database (e.g. disabling foreign key checks).
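If you don't know the name of the table that follows the one you need, a small variation on the same awk approach stops at whichever table comes next. This is a hedged sketch (extract_table.sh is a hypothetical helper name, and table names containing regex metacharacters would need escaping):

#!/bin/bash
# Usage: ./extract_table.sh databasedump.sql table1 > extracted_table.sql
DUMPFILE=$1
TABLE=$2
awk -v t="$TABLE" '
  $0 ~ ("Table structure for table `" t "`") { found = 1 }           # start of the wanted table
  found && /Table structure for table/ && $0 !~ ("`" t "`") { exit } # the next table begins here
  found { print }
' "$DUMPFILE"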

To restore your table, run:

# mysql -u user -ppassword databasename < extracted_table.sql

Shell Script To List All Files Changed On That Day

Sometimes system admins are asked to turn sleuth and find out which files were modified on a specific day, in order to work out what went wrong.

This script will list all the files in a directory that have been modified that day.

1. Copy the Shell script below and save it as modified_today.sh
2. Make it executable by chmod +x modified_today.sh
3. Now run the script as per the syntax below.

The syntax is:
sh modified_today.sh /name/of/directory

Note: You can also adjust the 'mtime' parameter so it covers the last 'n' days as required. Enjoy ;-)

#!/bin/bash
# Script Name : modified_today.sh
# Modifications :
# Description : Lists all the files in a directory that have been modified that day

#################################
# Start of procedures/functions #
#################################

funct_check_params()    # Function Name
{    # Start of the function
  if [ ${NARG} -ne 1 ]; then    # If the number of arguments is not one, then output a message
    echo "$0 : Not enough parameters passed, you need to supply a directory"
    exit 1    # Quit the program
  elif [[ ${SLICE} = "-h" ]] || [[ ${SLICE} = "--h" ]]; then
    # If the argument passed is -h or --h then display the usage message below
    echo "Usage: You need to add a slice after the script name, e.g. $0 /opt"
    exit 1    # Quit the program
  fi    # End of the if statement
}    # End of the function

funct_find_files()    # Function Name
{    # Start of the function
  find $SLICE -type f -mtime -1 > $LOGFILE    # Find all the files and put them into a logfile

  for files in $(cat $LOGFILE)    # Loop through all the files and show a long listing
  do
    ls -l $files
  done
}    # End of the function

################
# Main Program #
################

# Variable Settings

DATE=`date +"%d-%B-%Y"` ; export DATE    # Set the DATE variable, formatted as 9-September-2012
SLICE=$1    # Set the variable SLICE to the first argument passed
LOGFILE=/tmp/modified_$DATE.log    # Set the variable LOGFILE, which stores the list of files found
NARG=$#    # Set the variable NARG to the number of arguments on the command line

{    # Start of the main program
  funct_check_params    # Call the function funct_check_params
  funct_find_files    # Call the function funct_find_files
}    # End of the main program

## End of Script
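
If your system has GNU find (an assumption; BSD find behaves differently), much the same job can be done with a one-liner that lists files modified since midnight today rather than within the last 24 hours:

# List files under the given directory modified since midnight today (GNU find)
find /name/of/directory -type f -newermt "$(date +%Y-%m-%d)" -exec ls -l {} +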