SSH Tunneling (Port Forwarding)

This is a short introduction to SSH tunnelling (also known as “port forwarding”). We have tried to describe it with some simple examples.

Let’s define our sample setup: We have a PC at home called linuxhome. We want to connect to a computer in London called linuxwork, but we are only allowed to connect to a gateway machine called linuxgate:

The usual way would be a two step process: first connect from linuxhome to linuxgate and then from linuxgate to linuxwork. Let’s see how a tunnel can help:

* From a Unix-like machine

The following refers to OpenSSH 2.x and 3.x:

On linuxhome we execute this command:

ssh  -l userid  -L 7777:linuxwork:22  linuxgate  cat -

This means: open an ssh connection as user userid to host linuxgate and execute the command cat -. While the session is open, redirect all connections to port 7777 on the local machine to port 22 on machine linuxwork.

Now we can use any SSH command (ssh, slogin, scp, sftp) to connect directly to linuxwork through the tunnel. For example:

ssh -p 7777 localhost uname -a
slogin -p 7777 localhost
scp -p -P 7777 localhost:data/file1.txt .
sftp -o Port=7777 localhost

How it works:

The ssh process on the local machine linuxhome establishes an SSH connection with the sshd server process on the gateway machine linuxgate. It uses the well-known port 22 on the server side and some free port on the local machine, e.g. 605. In addition, because we have used the -L option, the local ssh process accepts local connections to port 7777 and sends all data received on this port through the other port 605 to linuxgate with some marking “this is from tunnel 7777”. The gateway linuxgate has been informed through the -L option that, whenever it receives data marked with “this is from tunnel 7777”, it has to open a connection to host linuxwork on port 22 and send it that data: 


More Details:


  1. The cat - command in the first ssh command is there only to keep the connection open. Any other command which does not finish could be used. It could be left out, too, thereby opening a shell, but then you need a controlling terminal and cannot use the ssh command in a script. 
  2. You can use any free port above 1024 for the -L option (ports below 1024 require root privileges); just pick one that is not already in use. 
  3. If you need to connect to several machines, then just specify more -L options in the first ssh command, one per machine, each with a different local port. For example:
    ssh -l userid -L 7777:linuxwork1:22 -L 7778:linuxwork2:22 
    -L 7779:linuxwork3:22 linuxgate cat -

    then use ssh -p 7777 localhost to connect to linuxwork1, ssh -p 7778 localhost to connect to linuxwork2, etc. 

  4. You can also redirect to other remote ports. For example, if machine linuxwork accepted telnet connections (port 23), then you could prepare the tunnel with:
    ssh -l userid -L 7777:linuxwork:23 linuxgate cat -
    and then just telnet to linuxwork with this command:
    telnet localhost 7777

    The port numbers of usual network services can be found in file ‘/etc/services’.

  5. You can write a small script to setup the SSH tunnel for all connections you normally need and call that script automatically every time you connect from home to the Internet.
  6. You can define aliases for connections which you need very often. For example, if you do (in a tcsh):
    alias sshwork 'ssh -p 7777 localhost'

    then you can simply do things like:

    sshwork uname -a

    sshwork ps -ef

    sshwork (to login)
  7. With some more complex aliases or shell scripts you can almost work as with a direct connection. For example, if you do:
    alias ssh \
    'set target=`echo \!^ | sed -e "s/work/-p 7777 localhost/g"` ; \
    /usr/local/bin/ssh $target \!:2*'

    then you can do:

    ssh work ps -ef 
  8. If you use the -v option for the ssh command which prepares the tunnel, then you can see in its output whenever a connection is established through the tunnel (and other debug messages).
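The tunnel-setup script mentioned in point 5 can be sketched as follows. This is only a sketch: the hostnames and ports are the examples from this article, and the -f and -N options (go to the background after authentication; run no remote command) replace the cat - trick so the script can run unattended.

```shell
#!/bin/bash
# open_tunnels: sketch of a tunnel-setup script using the examples above.
# -N opens the tunnels without running a remote command (no "cat -" needed),
# -f drops ssh into the background once the connection is established.
open_tunnels() {
    ssh -f -N -l userid \
        -L 7777:linuxwork1:22 \
        -L 7778:linuxwork2:22 \
        linuxgate
}

# call it once per session, e.g. from your login script:
# open_tunnels
```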


Spell checking in the “vi or vim” editor.

Sometimes, while editing configuration files on a Linux or Unix server, we need to check our spelling, and for that we usually reach for a dictionary. But we can do this within the vi editor itself, while editing.

Just to try it out, open a test.txt file with vi and type a few misspelled words in it.
Now save the file with :w! (press Enter).
Then run :!spell -b (press Enter).
Check the bottom of the screen: you will get the list of misspelled words there.

Cool, isn’t it!

Enjoy the vi editor.
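If your vi is actually vim 7.0 or newer, there is also a built-in spell checker, so the external spell program is not needed; a quick sketch:

```vim
" turn on built-in spell checking (vim 7+)
:set spell spelllang=en_us
" misspelled words are now highlighted; ]s jumps to the next one,
" z= suggests corrections, and :set nospell turns it off again
```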

Save time by wrapping the ps command in a small shell script

Here is a very easy and powerful way to use the ps command on Linux servers.

We all know that the ps command is very useful for listing the running processes on live Linux servers, and most of the time we run ps together with grep to find a specific process.

The script below saves you that typing: you no longer need to type the ps command with grep every time, and you get the same result.

Just create the file /usr/local/bin/psgrep and paste the following lines into it.


#!/bin/bash

####### The line above is called the “shebang” in shell scripting ########

# psgrep – search the process list for the pattern given as the first argument.

ps -ef | grep -v grep | grep "$1"


Now save the file, then change its permissions to make it world-executable (e.g. chmod 755 /usr/local/bin/psgrep).

Now, whenever you would reach for the ps command, you just need to type ‘psgrep httpd’ to grep for the httpd process.

This is a really handy ps trick that will save you a lot of time when you are working on Linux servers.

We have tested this on Red Hat Linux, Fedora Linux and Ubuntu Linux; you can also try it on your Linux desktop. Hopefully it will save you valuable time and your work will become faster. This is Linux’s power. 🙂
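By the way, if your distribution ships the procps tools (most do), the pgrep command gives a similar result without a helper script; a quick sketch:

```shell
# pgrep -f matches against the full command line, -l adds the process
# name next to the PID -- roughly what the psgrep script above does
pgrep -fl sh
```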

Remote Desktop connection in Linux

We know that corporate networks contain both Linux desktops and desktops running proprietary operating systems. In such an environment, if you want to access a proprietary-OS machine from your Linux desktop, you can use the “rdesktop” tool. With this utility you can easily access your remote machine.

First, check whether the rdesktop utility is installed on your Linux machine.

If it is not, it can easily be obtained from the internet.

After installing it you can execute the rdesktop command.

Replace the IP address below with your remote machine’s IP address. We can also specify the screen size of the remote desktop.

Below is an example where it will be 75 percent of the original size.

# rdesktop -g 75% <remote-machine-ip>

This is the easiest method of accessing a remote desktop from our Linux desktop.

Enjoy RDC on Linux. There are many other RDCs (Remote Desktop Clients) on the market, but on the Linux desktop platform this is a very good and cool one. Hope you like it.

If you are using Ubuntu Linux as your desktop, rdesktop is most probably already installed; just check under Applications -> Internet -> Remote Desktop Viewer.
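For hosts you connect to often, you can wrap the invocation in a tiny shell function. This is only a sketch: rdp is a made-up name, $USER is assumed to match your Windows username, while -g (geometry) and -u (username) are standard rdesktop options.

```shell
#!/bin/bash
# rdp: hypothetical wrapper -- open a session to the given host
# at 75% of the local screen size
rdp() {
    rdesktop -g 75% -u "$USER" "$1"
}
```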

How to setup redundant NFS using DRBD and Heartbeat on Linux Easily – Centos Fedora SuSe Redhat RHEL

Hi all, hope you are enjoying this series of articles, published on this website for free, totally free.

In this article you will learn how to set up redundant NFS servers using DRBD (Distributed Replicated Block Device) technology. The complete step-by-step procedure is listed below. It works for me, so hopefully it will work for you as well; no guarantees are given. You will need:

1. Two servers with a similar storage disk (hard drive) setup (to create a redundant NFS server)
2. One client system/server where the NFS share will be mounted
3. Static IPs for all servers

The first step is to install CentOS on both machines. During the install process, create a separate blank partition on both machines to be used as your NFS mount. Make sure you create partitions of exactly the same size on both servers. Set the mount point to /nfsdata during installation.

From this point on I’m going to refer to both NFS servers by their hostnames: server1 will be nfs1 and server2 will be nfs2. You may use your own range of private IPs, so make sure to put in the correct IPs where necessary throughout this how-to.

Do the following on both nfs1 and nfs2:

To view mount points on your system:

vi /etc/fstab

Search for the /nfsdata mount point and comment it out to prevent it from being mounted automatically on boot. Take note of the device for the /nfsdata mount point. Here is what my fstab looks like.

LABEL=/boot /boot ext3 defaults 1 2
#/dev/VolGroup00/LogVol04 /nfsdata ext3 defaults 1 2


If you are going to use external storage for the NFS partition, you can set it up after the base OS installation; it is not mandatory to create the NFS partition during OS installation. It is also best to use LVM for partitioning instead of fixed-size partitions, as LVM gives greater flexibility.

Now unmount the /nfsdata partition if it is mounted, as heartbeat will take care of mounting it.

umount /nfsdata

Make sure that ntp and ntpdate are installed on both of the nfs servers.

yum install ntp ntpdate

The time on both servers must be identical. Edit your /etc/ntp.conf file and verify settings.

Now let’s check and make sure that the nfs service does not run on startup and that SELinux is turned off.

If you have not installed the full operating system or all administration tools, you can use your distribution’s setup tool to disable the firewall and SELinux.
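On a CentOS/RHEL-style system (the distribution assumed in this article), these checks can be done with chkconfig and setenforce; a sketch, wrapped in a function since it needs root:

```shell
#!/bin/bash
# sketch for CentOS/RHEL: stop nfs starting at boot and relax SELinux.
# Run as root; heartbeat will manage the nfs service itself later.
disable_conflicts() {
    chkconfig nfs off       # do not start nfs at boot
    chkconfig iptables off  # firewall off (or open the DRBD/NFS ports instead)
    setenforce 0            # SELinux permissive for the running system
    # and set SELINUX=disabled in /etc/selinux/config for future boots
}
```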

Now we will install DRBD and the DRBD kernel module.

Note: If you are installing DRBD and the DRBD kernel module on a physical system, you will need to install:
drbd-8.0.16-5.el5.centos.x86_64.rpm AND kmod-drbd-8.0.16-5.el5_3.x86_64.rpm
And if you are using DRBD and the DRBD kernel module on a virtual machine (VM), you need to install:
drbd-8.0.16-5.el5.centos.x86_64.rpm AND kmod-drbd-xen-8.0.16-5.el5_3.x86_64.rpm
You can choose the package version according to your needs.
Along with DRBD and the DRBD kernel module, we will install the following RPMs as well:

yum install perl-TimeDate net-snmp-libs-x86_64

rpm -Uvh heartbeat-pils-2.1.4-2.1.x86_64.rpm

rpm -Uvh heartbeat-stonith-2.1.4-2.1.x86_64.rpm

rpm -Uvh heartbeat-2.1.4-2.1.x86_64.rpm

Now it’s time to edit the drbd.conf file; /etc/drbd.conf is the default location for that config file.

common { syncer { rate 100M; al-extents 257; } }

resource r0 {
protocol C;
handlers { pri-on-incon-degr "halt -f"; }
disk { on-io-error detach; }
startup { degr-wfc-timeout 60; wfc-timeout 60; }

on nfs1 {
device /dev/drbd0;
disk /dev/LogVol04/nfsdrbd;
address <ip-of-nfs1>:7789;
meta-disk internal;
}
on nfs2 {
device /dev/drbd0;
disk /dev/LogVol04/nfsdrbd;
address <ip-of-nfs2>:7789;
meta-disk internal;
}
}
Let me give you a little more information about the parameters used in the above config file, starting from the top.

  • protocol – This is the method that drbd will use to sync both of the nfs servers. There are 3 available options: Protocol A, Protocol B and Protocol C.
    Protocol A is an asynchronous replication protocol. The manual states, “local write operations on the primary node are considered completed as soon as the local disk write has occurred, and the replication packet has been placed in the local TCP send buffer. In the event of forced fail-over, data loss may occur. The data on the standby node is consistent after fail-over; however, the most recent updates performed prior to the crash could be lost.”
    Protocol B is a memory synchronous (semi-synchronous) replication protocol. The manual states, “local write operations on the primary node are considered completed as soon as the local disk write has occurred, and the replication packet has reached the peer node. Normally, no writes are lost in case of forced fail-over. However, in the event of simultaneous power failure on both nodes and concurrent, irreversible destruction of the primary’s data store, the most recent writes completed on the primary may be lost.”
    Protocol C is a synchronous replication protocol. The manual states, “local write operations on the primary node are considered completed only after both the local and the remote disk write have been confirmed. As a result, loss of a single node is guaranteed not to lead to any data loss. Data loss is, of course, inevitable even with this replication protocol if both nodes (or their storage subsystems) are irreversibly destroyed at the same time.”

    You may choose your desired protocol but Protocol C is the most commonly used one and it is the safest method.

  • rate – The rate is the maximum speed at which data will be sent from one nfs server to the other while syncing. This should be about a third of your maximum write speed. In my case, I have only a single disk that can write about 45 MB/sec, so a third of that would be 15 MB/sec. This number will usually be much higher for people with raid setups. In some large raid setups, the bottleneck would be the network and not the disks, so set the rate accordingly.
  • al-extents – The data on the disk is cut up into slices for synchronization purposes. For each slice there is an al-extent that is used to indicate any changes to that slice. Larger al-extents values make synchronization slower but benefit from fewer writes to the metadata partition. In my case, I’m using internal metadata, which means the drbd metadata is written to the same partition that my nfs data is on. It benefits me to have fewer metadata writes, to prevent the disk arm from constantly moving back and forth and degrading performance. If you are using a raid setup and a separate partition for the metadata, then set this number lower to benefit from faster synchronization. This number MUST be a prime to gain the most possible performance, because it is used in specific hashes that benefit from prime-number-sized structures.
  • pri-on-incon-degr – The “halt -f” command is executed if the node is primary, degraded and the data is inconsistent. I use this to make sure drbd is halted when there is some sort of data inconsistency, to prevent a major mess from occurring.
  • on-io-error – This allows you to handle low-level I/O errors. The method I use is “detach”, which is the recommended option: on the occurrence of a lower-level I/O error, the node drops its backing device and continues in diskless mode.
  • degr-wfc-timeout – This is the amount of time in seconds that is allowed before a connection is timed out. In case a degraded cluster (cluster with only one node left) is rebooted, this timeout value is used instead of wfc-timeout, because the peer is less likely to show up in time, if it had been dead before.

The rest of the config is pretty self explanatory. Replace nfs1 and nfs2 with the hostnames of your nfs servers. To get the hostnames use the following command on both servers:

uname -n

Then replace the disk value with the device name from your fstab file that you commented out. Enter the IP address of each server and use port 7789. The last part is the meta-disk. I used an internal meta-disk because I only have one hard disk in the server and it would not give me any benefit to create a separate partition for the metadata. If you have a raid setup or a separate disk from your data partition that you can use for the metadata, then go ahead and create a 150 MB partition and replace the word “internal” in the config file with the device name you used for the metadata partition.

Now that we finally have our drbd.conf file ready we can move on. Lets go ahead and enable the drbd kernel module.

modprobe drbd

Now that the kernel module is enabled lets start up drbd.

drbdadm up all

This will start drbd, now lets check its status.

cat /proc/drbd

You can always use the above command to check the status of drbd. The above command should show you something like this.

0: cs:Connected st:Secondary/Secondary ld:Inconsistent
ns:0 nr:0 dw:0 dr:0 al:0 bm:1548 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

You should get some more data before it but the above part is what we are interested in. If you notice it shows that drbd is connected and both nodes are in secondary mode. This is because we have not assigned which node is going to be the primary yet. It also says the data is inconsistent because we have not done the initial sync yet.

I am going to set nfs1 to be my primary node and nfs2 to be my secondary node. If nfs1 fails, nfs2 will takeover but if nfs1 comes back online then all the data from nfs2 will be synced back to nfs1 and nfs1 will take over again.

First of all, let’s go ahead and delete any data that was created on the /data partition that we set up during our initial OS installation. Be very careful with the command below: make sure to use the appropriate device, because all data on that device will be lost.

dd if=/dev/zero bs=1M count=1 of=/dev/VolGroup00/LogVol04; sync

Replace “/dev/VolGroup00/LogVol04” with your device for the /data partition. Now that the partition is completely erased on both servers, let’s create the metadata.

drbdadm create-md r0

Do the following ONLY on nfs1:

Now that the metadata is created, we can move onto assigning a primary node and conducting the initial sync. It is absolutely important that you only execute the following command on the primary node. It doesn’t matter which node you choose to be the primary since they should be identical. In my case, I decided to use nfs1 as the primary.

drbdadm -- --overwrite-data-of-peer primary r0

OK, now we just have to sit back and wait for the initial sync to finish. This is going to take some time: even though there is no data on the device yet, drbd has to sync every single block of the /data partition from nfs1 to nfs2. You can check the status with the following command.

cat /proc/drbd

Do the following on both nfs1 and nfs2:

After the initial sync is finished, “cat /proc/drbd” should show something like this.

0: cs:Connected st:Primary/Secondary ld:Consistent
ns:12125 nr:0 dw:0 dr:49035 al:0 bm:6 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

If you notice, we are still connected and have a primary and secondary node with consistent data.

Do the following ONLY on nfs1:

Now lets make an ext3 file system on our drbd device and mount it. Since drbd is running, the ext3 file system will also be created on the secondary node.

mkfs.ext3 /dev/drbd0

The above command will create an ext3 file system on the drbd device. Now lets go ahead and mount it.

mount -t ext3 /dev/drbd0 /data

We know that NFS stores important information in /var/lib/nfs by default, which it requires to function correctly. In order to preserve file locks and other important information, we need that data stored on the drbd device so that if the primary node fails, NFS on the secondary node will continue right where the primary node left off.

mv /var/lib/nfs/ /data/
ln -s /data/nfs/ /var/lib/nfs
mkdir /data/export
umount /data

So lets go over what we just did.

  • We have now moved the nfs folder from /var/lib to /data.
  • We created a symbolic link from /var/lib/nfs to /data/nfs since the operating system is still going to look for /var/lib/nfs when nfs is running.
  • We created an export directory in /data to store all the actual data that we are going to use for our nfs share.
  • Finally, we un-mounted the /data partition since we finished what we were doing.

Do the following ONLY on nfs2:
Since we moved the nfs folder to /data, that was synced over to the secondary node as well. We just need to create the symbolic link so that when the /data partition is mounted on nfs2 we have a link to the nfs data.

rm -rf /var/lib/nfs/
ln -s /data/nfs/ /var/lib/nfs

So we removed the nfs folder and created a symbolic link from /var/lib/nfs to /data/nfs. The symbolic link will be broken since the /data partition is not mounted. Don’t worry about that: in the event of a failover, that partition will be mounted and everything will be fine.

Now we need to configure heartbeat on both nfs servers, nfs1 and nfs2; we have already installed the required software.
Create the heartbeat configuration file under /etc/ha.d/ on both nfs servers with the following contents:

keepalive 2
deadtime 30
bcast eth0
node ukibinfs01 ukibinfs02

Replace the names given for “node” with your own hostnames; to find out your hostname, use uname -n.

Now we need to create the /etc/ha.d/haresources configuration file on both nfs servers with the following configuration:

nfs1 IPaddr:: drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfslock nfs
The IP address used in the haresources config file is a floating IP. Whichever of the two NFS servers is primary will have that IP configured on eth0:0.

How to recover the MySQL root password

You can recover the MySQL database server password with the following easy steps.

Step # 1: Stop the MySQL server process.

Step # 2: Start the MySQL (mysqld) server/daemon process with the --skip-grant-tables option so that it will not prompt for a password

Step # 3: Connect to mysql server as the root user

Step # 4: Set up a new root password

Step # 5: Exit and restart MySQL server

Here are commands you need to type for each step (login as the root user):

Step # 1 : Stop mysql service

# /etc/init.d/mysql stop


Stopping MySQL database server: mysqld.

Step # 2: Start the MySQL server w/o password:

# mysqld_safe --skip-grant-tables &

[1] 5988
Starting mysqld daemon with databases from /var/lib/mysql
mysqld_safe[6025]: started

Step # 3: Connect to mysql server using mysql client:

# mysql -u root

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.15-Debian_1-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.


Step # 4: Set up the new MySQL root user password

mysql> use mysql;
mysql> update user set password=PASSWORD("NEW-ROOT-PASSWORD") where User='root';
mysql> flush privileges;

mysql> quit
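Note that the UPDATE statement above matches the mysql.user table layout of MySQL 4.x/5.0, the versions shown in this article. On newer servers (MySQL 5.7 and later) the Password column is gone, so, as an assumption to verify against your version's manual, the equivalent step would be:

```sql
-- when started with --skip-grant-tables, reload the grant tables first
FLUSH PRIVILEGES;
-- MySQL 5.7+ replacement for the UPDATE statement above
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NEW-ROOT-PASSWORD';
```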

Step # 5: Stop MySQL Server:

# /etc/init.d/mysql stop

Stopping MySQL database server: mysqld
STOPPING server from pid file /var/run/mysqld/
mysqld_safe[6186]: ended

[1]+  Done                    mysqld_safe --skip-grant-tables

Step # 6: Start MySQL server and test it

# /etc/init.d/mysql start

# mysql -u root -p

Read-only filesystem to read-write.

Hi all. Many times we come across systems where the root (/) filesystem is mounted read-only (ro) and we need to edit some configuration files, but we cannot until we remount it with read-write (rw) privileges.

Say, for example, you changed some configuration in the /etc/fstab file, and the next time you reboot, the system cannot find the /boot or / partition references listed in the fstab file. You are confident that your data is there on the disk, but the system is not able to mount it correctly and refuses to boot.

In such a situation you will be prompted to enter the root password to gain access to the system, or it will ask you to run e2fsck. At this point:

  • running e2fsck will not get you anywhere either, as the system cannot see that partition;
  • when you enter the root password, you will get a root prompt with full access, but the filesystem will be read-only (ro).

You will need to remount the filesystem with read-write (rw) permissions to modify the configuration or to write anything. Use the following command to do so.

mount -o remount,rw /

This will give you full read-write permissions on the system, and you will be able to fix it.
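You can confirm the effect by looking at how / is currently mounted; the options field of the / entry in /proc/mounts shows ro before the remount and rw afterwards:

```shell
# the fourth field of the / entry lists the mount options (ro or rw, ...)
grep ' / ' /proc/mounts
```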

The /proc file system

Linux System Configuration and the proc filesystem


/proc is a virtual filesystem which is mounted automatically when your system starts. It is used to expose many system configuration parameters, and it resides in the kernel’s memory. Most of it is read-only, but some areas, such as the files under /proc/sys, can be written to by the root user to tune the running kernel. Much of the information here is based on the proc man page; for more information refer to that page. Elements of the proc filesystem include:

* Numerical subdirectories, one created for every process. The following files and directories are contained in each process’s directory:

  • 1. cmdline – The command line the process was invoked with
    2. cwd – A link to the current working directory of the process
    3. environ – The process environment
    4. exe – A pointer appearing as a symbolic link to the binary that was executed.
    5. fd – A subdirectory with one entry per file that the process has open. 0-std input, 1-std output, 2-std err.
    6. maps – Contains the currently mapped memory regions and their access permissions. The format is:
address perms offset dev inode filename
08048000-0805d000 r-xp 00000000 08:08 81491 /sbin/init
0805d000-0805e000 rwxp 00015000 08:08 81491 /sbin/init
0805e000-081ab000 rwxp 0805e000 00:00 0 [heap]
b7e1d000-b7e1e000 rwxp b7e1d000 00:00 0
b7e1e000-b7f59000 r-xp 00000000 08:08 538659 /lib/tls/i686/cmov/
b7f59000-b7f5a000 r-xp 0013b000 08:08 538659 /lib/tls/i686/cmov/
b7f5a000-b7f5c000 rwxp 0013c000 08:08 538659 /lib/tls/i686/cmov/
b7f5c000-b7f5f000 rwxp b7f5c000 00:00 0
b7f72000-b7f74000 rwxp b7f72000 00:00 0
b7f74000-b7f8d000 r-xp 00000000 08:08 504949 /lib/
b7f8d000-b7f8f000 rwxp 00019000 08:08 504949 /lib/
bfdc0000-bfdd6000 rw-p bfdc0000 00:00 0 [stack]
ffffe000-fffff000 r-xp 00000000 00:00 0 [vdso]

The last character of the perms field is p (private) or s (shared).
7. mem – The memory of the process that accesses the /dev/mem device
8. root – Points to the root filesystem
9. stat – Status information about the process used by the ps(1) command. Fields are:
1. pid – Process id
2. comm – The executable filename
3. state – R (running), S (sleeping, interruptible), D (uninterruptible sleep), Z (zombie), or T (stopped on a signal).
4. ppid – Parent process ID
5. pgrp – Process group ID
6. session – The process session ID.
7. tty – The tty the process is using
8. tpgid – The process group ID of the owning process of the tty the current process is connected to.
9. flags – Process flags, currently with bugs
10. minflt – Minor faults the process has made
11. cminflt – Minor faults the process and its children have made.
12. majflt
13. cmajflt
14. utime – The number of jiffies (processor time) that this process has been scheduled in user mode
15. stime – in kernel mode
16. cutime – This process and its children in user mode
17. cstime – in kernel mode
18. counter – The maximum time of this process’s next time slice.
19. priority – The priority of the nice(1) (process priority) value plus fifteen.
20. timeout – The time in jiffies of the process’s next timeout.
21. itrealvalue – The time in jiffies before the next SIGALRM is sent to the process because of an internal timer.
22. starttime – Time the process started after system boot
23. vsize – Virtual memory size
24. rlim – Current limit in bytes of the rss of the process.
25. startcode – The address above which program text can run.
26. endcode – The address below which program text can run.
27. startstack – The address of the start of the stack
28. kstkesp – The current value of esp for the process as found in the kernel stack page.
29. kstkeip – The current 32 bit instruction pointer, EIP.
30. signal – The bitmap of pending signals
31. blocked – The bitmap of blocked signals
32. sigignore – The bitmap of ignored signals
33. sigcatch – The bitmap of caught signals
34. wchan – The channel in which the process is waiting. The “ps -l” command gives somewhat of a list.
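The entries above are easy to explore by hand; for example, using the current shell’s own PID ($$):

```shell
#!/bin/bash
# peek at the /proc entry of the running shell itself
pid=$$
tr '\0' ' ' < /proc/$pid/cmdline; echo   # argv, NUL-separated on disk
readlink /proc/$pid/cwd                  # current working directory
cut -d' ' -f1-3 /proc/$pid/stat          # fields 1-3: pid, comm, state
```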

* apm – A file containing the string “1.9 1.2 0x07 0x01 0xff 0x80 -1% -1 ?” on my system.
* bus – A directory
o pci – A directory
+ 00 – A directory containing filenames like 00.0, 07.0, 07.1, 07.2, 08.0, 09.0, and 0b.0. Each are 256 bytes long and appear to be in binary form.
+ devices – I think this file numerically identifies devices on the pci bus. My file contains:

Character devices:
1 mem
2 pty
3 ttyp
4 /dev/vc/0
4 tty
4 ttyS
5 /dev/tty
5 /dev/console
5 /dev/ptmx
6 lp
7 vcs
10 misc
13 input
14 sound
21 sg
29 fb
99 ppdev
116 alsa
128 ptm
136 pts
180 usb
189 usb_device
216 rfcomm
226 drm
253 pcmcia
254 usb_endpoint

Block devices:
1 ramdisk
8 sd
65 sd
66 sd
67 sd
68 sd
69 sd
70 sd
71 sd
128 sd
129 sd
130 sd
131 sd
132 sd
133 sd
134 sd
135 sd

* cmdline – The command line at system startup. My file contains “auto BOOT_IMAGE=rhl ro root=302”.
* cpuinfo – CPU architecture information
* devices – Text listing of major numbers and device groups
* dma – A list of ISA direct memory access channels in use.
* fb – On my system, this file is empty
* filesystems – A text listing of the filesystems compiled into the kernel. The file on my system:

nodev sysfs
nodev rootfs
nodev bdev
nodev proc
nodev cpuset
nodev debugfs
nodev securityfs
nodev sockfs
nodev pipefs
nodev futexfs
nodev tmpfs
nodev inotifyfs
nodev eventpollfs
nodev devpts
nodev ramfs
nodev mqueue
nodev usbfs
nodev fuse
nodev fusectl
nodev binfmt_misc

* fs – A directory
o nfs – A directory
+ exports – A file containing information similar to that in the /etc/exports file. My listing:
# Version 1.0
# Path Client(Flags) # IPs
/tftpboot/lts/ltsroot linux1(ro,no_root_squash,async,wdelay) #
/tftpboot/lts/ltsroot linux3(ro,no_root_squash,async,wdelay) #
/tftpboot/lts/ltsroot linux2(ro,no_root_squash,async,wdelay) #

+ time-diff-margin – A file containing a numerical string value. On my system it is “10”.
* ide – A directory containing information on ide devices.
o drivers – A file describing the ide drivers on the system. My file:

ide-cdrom version 4.54
ide-disk version 1.08

o hda – Symbolic link to ide0/hda
o hdb – Symbolic link to ide0/hdb
o hdd – Symbolic link to ide1/hdd
o ide0 – A directory containing information on the device ide0.
+ channel – A file containing the channel number of the… device? My file contains the number 0.
+ config – A file. My file:

pci bus 00 device 39 vid 8086 did 7111 channel 0
86 80 11 71 05 00 80 02 01 80 01 01 00 40 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
01 f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
77 e3 30 c0 9b 00 00 00 03 00 22 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 28 0f 00 00 00 00 00 00

+ hda – A directory containing the following files
# cache – The amount of cache capability in KB? On my system it is the number “256”
# capacity – The capacity of the device. On my system it is 12500460 on a 6GB hard drive.
# driver – the type of driver. On my system it is “ide-disk version 1.08”.
# geometry – The disk geometry of the device. On my system:

physical 13228/15/63
logical 778/255/63

# identify – On my system:

0c5a 33ac 0000 000f 0000 0000 003f 0000
0000 0000 3541 4330 3336 3350 2020 2020
2020 2020 2020 2020 0000 0200 0000 332e
3034 2020 2020 5354 3336 3432 3241 2020
2020 2020 2020 2020 2020 2020 2020 2020
2020 2020 2020 2020 2020 2020 2020 8010
0000 2f00 0000 0200 0200 0007 33ac 000f
003f bdec 00be 0010 bdec 00be 0000 0007
0003 0078 0078 00f0 0078 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
001e 0000 3069 4001 4000 3068 0001 4000
0407 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 85c4 00cb 85c4 00cb 2020 0002 0000
0001 0000 0001 0401 0001 0140 0201 0000
3c24 0001 4001 3cb4 0100 0100 0072 3001
0001 0128 0000 0000 1000 0105 00e9 0009
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
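
The identify file is the drive's raw ATA IDENTIFY data, one 16-bit word per group. As an illustrative sketch (word offsets taken from the ATA specification, using the first six rows of the dump above), a few fields can be decoded by hand; note how the physical geometry and the model string reappear:

```python
# Sketch: decode a few fields of the raw ATA IDENTIFY data shown above.
# Word offsets follow the ATA specification: word 1 = cylinders,
# word 3 = heads, word 6 = sectors, words 27-46 = model string.
HEX = ("0c5a 33ac 0000 000f 0000 0000 003f 0000 "
       "0000 0000 3541 4330 3336 3350 2020 2020 "
       "2020 2020 2020 2020 0000 0200 0000 332e "
       "3034 2020 2020 5354 3336 3432 3241 2020 "
       "2020 2020 2020 2020 2020 2020 2020 2020 "
       "2020 2020 2020 2020 2020 2020 2020 8010")

words = [int(w, 16) for w in HEX.split()]

def ata_string(words, lo, hi):
    # Each identify word packs two ASCII characters, high byte first
    chars = []
    for w in words[lo:hi]:
        chars.append(chr(w >> 8))
        chars.append(chr(w & 0xFF))
    return "".join(chars).strip()

print(words[1], words[3], words[6])   # 13228 15 63 -- the physical geometry
print(ata_string(words, 27, 47))      # ST36422A    -- the model file's value
```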

# media – On my system it is “disk”.
# model – The manufacturer's model. On my system it is “ST36422A”.
# settings – On my system:

name value min max mode
---- ----- --- --- ----
bios_cyl 778 0 65535 rw
bios_head 255 0 255 rw
bios_sect 63 0 63 rw
breada_readahead 4 0 127 rw
bswap 0 0 1 r
file_readahead 124 0 2097151 rw
io_32bit 0 0 3 rw
keepsettings 0 0 1 rw
max_kb_per_request 64 1 127 rw
multcount 0 0 8 rw
nice1 1 0 1 rw
nowerr 0 0 1 rw
pio_mode write-only 0 255 w
slow 0 0 1 rw
unmaskirq 0 0 1 rw
using_dma 1 0 1 rw
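
The settings file is a small whitespace-separated table with a two-line header. As an illustrative sketch (not an official interface), it can be parsed into a dictionary; the sample rows below are copied from the listing above:

```python
# Hypothetical parser for an ide "settings" table; sample rows are
# copied from the listing above.
SAMPLE = """\
name value min max mode
---- ----- --- --- ----
bios_cyl 778 0 65535 rw
bios_head 255 0 255 rw
pio_mode write-only 0 255 w
using_dma 1 0 1 rw
"""

def parse_settings(text):
    settings = {}
    for line in text.splitlines()[2:]:      # skip the two header lines
        name, value, lo, hi, mode = line.split()
        # value stays a string: pio_mode's value is "write-only"
        settings[name] = {"value": value, "min": int(lo),
                          "max": int(hi), "mode": mode}
    return settings

print(parse_settings(SAMPLE)["bios_head"]["max"])   # 255
```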

# smart_thresholds – A table of numbers similar to the file identify.
# smart_values – A table of numbers as in smart_thresholds.
+ hdb – A directory containing the same files as hda, above.
+ mate – A file containing the string “ide1” on my system, which I think names the companion IDE interface.
+ model – A file containing the string “pci” on my system.
o ide1 – A directory similar to the directory ide0, above, with information on the device ide1.
* interrupts – The number of interrupts per IRQ.
* ioports – A list of currently registered input-output port regions that are in use.
* kcore – Represents the physical memory of the system stored in the core format.
* kmsg – This file can be used to log system messages.
* ksyms – Holds the kernel exported symbol definitions used by the modules(X) tools to dynamically link and bind loadable modules.
* loadavg – Load average numbers
* malloc – Present if CONFIG_DEBUG_MALLOC was defined during kernel compilation.
* locks – The file on my system:

1: POSIX ADVISORY WRITE 29396 08:08:509286 0 EOF
2: POSIX ADVISORY WRITE 6086 08:08:522914 0 EOF
3: POSIX ADVISORY WRITE 5508 08:08:602744 0 EOF
4: POSIX ADVISORY WRITE 5310 00:0f:17832 0 EOF
5: FLOCK ADVISORY WRITE 5194 00:0f:17594 0 EOF
6: POSIX ADVISORY WRITE 5181 00:0f:17583 0 EOF
7: POSIX ADVISORY WRITE 4880 08:08:130879 0 EOF
8: POSIX ADVISORY WRITE 4880 08:08:130878 0 EOF
9: POSIX ADVISORY WRITE 4880 08:08:130877 0 EOF
10: FLOCK ADVISORY WRITE 4780 00:0f:16731 0 EOF
11: FLOCK ADVISORY WRITE 4778 00:0f:16720 0 EOF
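
Each line of this file splits into fixed fields; the pair like 08:08 is the major:minor device number (in hex in /proc/locks, as I understand it), followed by the inode number. A sketch of a line parser, using the first line of the listing above as its sample:

```python
def parse_lock(line):
    # Fields: id, class, mode, access, pid, major:minor:inode, start, end
    _id, cls, mode, access, pid, dev_inode, start, end = line.split()
    major, minor, inode = dev_inode.split(":")
    return {"class": cls, "mode": mode, "access": access,
            "pid": int(pid),
            "major": int(major, 16), "minor": int(minor, 16),  # hex in /proc/locks
            "inode": int(inode),
            "start": start, "end": end}

lock = parse_lock("1: POSIX ADVISORY WRITE 29396 08:08:509286 0 EOF")
print(lock["pid"], lock["major"], lock["minor"])   # 29396 8 8
```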

* mdstat – The file on my system:

Personalities :
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive

* meminfo – Used by free(1) to report memory usage.
* misc – The file on my system:

135 rtc
134 apm
1 psaux

* modules – A list of kernel modules loaded by the system
* mounts – Lists mounted filesystems: device, mount point, filesystem type, mount options, and two flags, as in fstab(5). The file on my system:

/dev/root / ext2 rw 0 0
/proc /proc proc rw 0 0
/dev/hdb1 /data vfat rw 0 0
/dev/hda1 /dos vfat rw 0 0
/dev/hda3 /slackw ext2 rw 0 0
none /dev/pts devpts rw 0 0
automount(pid640) /mnt autofs rw 0 0
ENG_SRV/MYUSER /eng_srv ncpfs rw 0 0
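
Since the six columns mirror an fstab(5) line, each mount parses the same way. A small sketch, with sample lines copied from the listing above:

```python
# Sketch: parse /proc/mounts lines (sample rows copied from above)
SAMPLE_MOUNTS = """\
/dev/root / ext2 rw 0 0
/proc /proc proc rw 0 0
/dev/hdb1 /data vfat rw 0 0
"""

def parse_mounts(text):
    mounts = []
    for line in text.splitlines():
        device, mountpoint, fstype, options, dump, passno = line.split()
        mounts.append({"device": device, "mountpoint": mountpoint,
                       "fstype": fstype, "options": options,
                       "dump": int(dump), "pass": int(passno)})
    return mounts

print(parse_mounts(SAMPLE_MOUNTS)[2]["fstype"])   # vfat
```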

* mtrr – The file on my system:

reg00: base=0x000a0000 ( 0MB), size= 128kB: write-combining, count=1
reg01: base=0x000c0000 ( 0MB), size= 256kB: uncachable, count=1
reg03: base=0x000a8000 ( 0MB), size= 32kB: write-combining, count=1
reg07: base=0x00000000 ( 0MB), size= 64MB: write-back, count=1

* net – Various network pseudo files. The netstat(8) command suite provides cleaner access to these files. Files:
1. arp – The kernel address resolution protocol table.
2. dev – Network device status information
3. ipx
4. ipx_route
5. rarp – Used to provide rarp(8) services.
6. raw – A dump of the RAW socket table
7. route – The kernel routing table, resembling the output of route(8).
8. snmp – Holds the ASCII databases used for the IP, ICMP, TCP, and UDP management information bases for an snmp agent.
9. tcp – A dump of the TCP socket table.
10. udp – A dump of the UDP socket table
11. unix – Lists UNIX domain sockets and their status.
* partitions – Lists the partitions and their device major and minor numbers. The file on my system:

major minor #blocks name

3 0 6250230 hda
3 1 208813 hda1
3 2 3068415 hda2
3 3 2843505 hda3
3 4 128520 hda4
3 64 6250230 hdb
3 65 6249253 hdb1
22 64 1073741823 hdd
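
Because the table is whitespace-separated with a two-line header, it parses easily. For instance (a sketch using rows copied from the table above):

```python
# Sketch: parse the partitions table (rows copied from above)
SAMPLE_PARTITIONS = """\
major minor #blocks name

3 0 6250230 hda
3 1 208813 hda1
3 64 6250230 hdb
"""

def parse_partitions(text):
    parts = {}
    for line in text.splitlines()[2:]:   # skip header and blank line
        major, minor, blocks, name = line.split()
        parts[name] = (int(major), int(minor), int(blocks))
    return parts

print(parse_partitions(SAMPLE_PARTITIONS)["hda1"])   # (3, 1, 208813)
```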

* pci – A listing of all PCI devices that the system is aware of.
* rtc – A file containing clock information. The file on my system:

rtc_time : 20:15:03
rtc_date : 2000-05-07
rtc_epoch : 1900
alarm : 16:29:44
DST_enable : no
BCD : yes
24hr : yes
square_wave : no
alarm_IRQ : no
update_IRQ : no
periodic_IRQ : no
periodic_freq : 1024
batt_status : okay

* scsi – A directory with scsi files and driver directories.
1. scsi – A list of all scsi devices known to the kernel
2. drivername – Various scsi driver brand names
* self – A symbolic link to the /proc directory of the process that is accessing /proc; each process sees its own entry here.
* slabinfo – The file on my system

slabinfo – version: 1.0
kmem_cache 29 42
pio_request 0 0
tcp_tw_bucket 0 42
tcp_bind_bucket 41 127
tcp_open_request 0 0
skbuff_head_cache 64 147
sock 150 242
dquot 0 0
filp 1505 1512
signal_queue 0 0
buffer_head 566 1428
mm_struct 65 93
vm_area_struct 2527 3528
dentry_cache 4704 4743
files_cache 72 99
uid_cache 4 127
size-131072 0 0
size-65536 0 0
size-32768 0 0
size-16384 16 16
size-8192 0 1
size-4096 3 8
size-2048 151 176
size-1024 20 32
size-512 37 72
size-256 33 70
size-128 546 700
size-64 192 210
size-32 1080 1197
slab_cache 78 126

* stat – Kernel statistics; the file contains the following entries:
1. cpu – Jiffies spent in user mode, user mode with low priority, system mode, and idle.
2. disk – Four disk entries not yet implemented
3. page – The number of pages the system paged in and out.
4. swap – Swap pages that have been brought in and out.
5. intr – The number of interrupts received since system boot.
6. ctxt – The number of context switches that the system underwent.
7. btime – Boot time in seconds since Jan 1, 1970.
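
The four jiffy counters on the cpu line (this is the 2.2-era format) make it easy to estimate how busy the CPU has been since boot. A sketch, using hypothetical sample numbers rather than a real /proc/stat line:

```python
def cpu_busy_fraction(cpu_line):
    # cpu line: user, nice (low-priority user), system, idle jiffies
    fields = cpu_line.split()
    assert fields[0] == "cpu"
    user, nice, system, idle = map(int, fields[1:5])
    total = user + nice + system + idle
    return (total - idle) / total

# Hypothetical sample line (real values come from /proc/stat):
frac = cpu_busy_fraction("cpu 2255 34 2290 22625")
print(round(frac, 3))   # 0.168
```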
* swaps – A file defining swap partitions. The file on my system:

Filename Type Size Used Priority
/dev/hda4 partition 128516 7600 -1

* sys – Directory corresponding to kernel variables
o debug
o dev
o fs
o kernel
+ domainname
+ file-max
+ file-nr
+ hostname
+ inode-max
+ inode-nr
+ osrelease
+ ostype
+ panic
+ real-root-dev
+ securelevel
+ version
o net
o proc
o sunrpc
o vm
* tty – A directory with information about the tty drivers.
o driver – A directory
+ serial – A file
o drivers – A file listing device drivers. The file on my system:

pty_slave /dev/pts 136 0-255 pty:slave
pty_master /dev/ptm 128 0-255 pty:master
pty_slave /dev/ttyp 3 0-255 pty:slave
pty_master /dev/pty 2 0-255 pty:master
serial /dev/cua 5 64-95 serial:callout
serial /dev/ttyS 4 64-95 serial
/dev/tty0 /dev/tty0 4 0 system:vtmaster
/dev/ptmx /dev/ptmx 5 2 system
/dev/console /dev/console 5 1 system:console
/dev/tty /dev/tty 5 0 system:/dev/tty
unknown /dev/tty 4 1-63 console

o ldisc – A directory containing no files on my system.
o ldiscs – The file on my system:

n_tty 0

* uptime – The uptime of the system and the time spent idle, both in seconds.
* version – The kernel version string.

The sysctl tool

This tool is worth mentioning in this section since it is used to examine and modify kernel parameters at run time. If you type “sysctl -a | more” you will see a long list of kernel parameters, each corresponding to a file under the /proc/sys directory described above. You can use the sysctl program to modify these parameters; however, I have been unable to add new parameters.
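
Each sysctl parameter name maps directly onto a file under /proc/sys: the dots become path separators. As a sketch, reading such a file is equivalent to querying the parameter with sysctl (and writing to the file, where permitted, is equivalent to sysctl -w):

```python
import os

def sysctl_path(name):
    """Map a sysctl name such as 'kernel.osrelease' to its /proc/sys file."""
    return os.path.join("/proc/sys", *name.split("."))

def read_sysctl(name):
    # Equivalent to running e.g. `sysctl kernel.osrelease`
    with open(sysctl_path(name)) as f:
        return f.read().strip()

print(sysctl_path("kernel.osrelease"))   # /proc/sys/kernel/osrelease
```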