What is the difference between the worker and prefork MPMs?

Apache (httpd) is a very popular and widely deployed web server around the world. Apache ships with multiple modules, and the term MPM stands for Multi-Processing Module. We can check which MPM is compiled in by running the command "httpd -l".
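For example, on a stock prefork build of httpd the command prints something like this (the exact module list depends on how your httpd was compiled):

# httpd -l
Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c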

On Linux, Apache 2 ships with the following two main MPM modules.

PREFORK
WORKER

(mpm_winnt This Multi-Processing Module is optimized for Windows NT.)
(mpm_netware Multi-Processing Module implementing an exclusively threaded web server optimized for Novell NetWare)

A) Prefork MPM

The prefork MPM handles HTTP requests the same way the older Apache 1.3 did. As the name suggests, it pre-forks the necessary child processes while starting Apache. It is suitable for websites that want to avoid threading for compatibility reasons, i.e. for non-thread-safe libraries. It is also known as the best MPM for isolating each incoming HTTP request.

How it works: a single control (master) process is responsible for launching the child processes which serve incoming HTTP requests. Apache always tries to maintain several spare (not-in-use) server processes, which stand ready to serve incoming requests. In this way, clients do not need to wait for a new child process to be forked before their requests can be served.
We can adjust the number of spare processes through the Apache configuration. The default settings are usually enough for a small amount of traffic, and you can always tune the directives / values below to suit your requirements.

Prefork is the default MPM shipped by Apache on most Linux distributions.

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      256
MaxClients       256
MaxRequestsPerChild  4000
</IfModule>

B) Worker MPM

The worker MPM is a Multi-Processing Module that implements a hybrid multi-process, multi-threaded server. By using threads to serve requests, it is able to serve a large number of requests with fewer system resources than a process-based server.

The most important directives used to control this MPM are ThreadsPerChild, which controls the number of threads deployed by each child process, and MaxClients, which controls the maximum total number of threads that may be launched.

Strength : memory usage and performance are better than with prefork.
Weakness : worker does not play well with non-thread-safe modules such as mod_php, which is why PHP sites usually stay on prefork.

How it works : a single control process (the parent) is responsible for launching child processes. Each child process creates a fixed number of server threads, as specified by the ThreadsPerChild directive, as well as a listener thread which listens for connections and passes them to a server thread for processing when they arrive.

Apache always tries to maintain a group of spare or idle server threads, which stand ready to serve incoming requests. In this way, clients do not need to wait for new threads or processes to be created before their requests can be served. The number of processes that are initially launched is set by the StartServers directive. During operation, Apache assesses the total number of idle threads in all processes, and forks or kills processes to keep this number within the boundaries specified by MinSpareThreads and MaxSpareThreads. Since this process is self-regulating, it is rarely necessary to modify these directives from their default values. The maximum number of clients that may be served simultaneously (i.e., the maximum total number of threads in all processes) is determined by the MaxClients directive. The maximum number of active child processes is determined by MaxClients divided by ThreadsPerChild.
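As a quick sanity check of that formula against the sample worker configuration below:

# With the values used in the worker example below:
#   MaxClients      = 300  (maximum simultaneous threads)
#   ThreadsPerChild =  25  (threads per child process)
# maximum active child processes = MaxClients / ThreadsPerChild = 300 / 25 = 12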

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers         4
MaxClients         300
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>

Apache / HTTPD : No space left on device: Cannot create SSLMutex

It is true that life teaches you a new lesson every day… Yesterday, for the first time, I came across a server where I was unable to restart the apache / httpd service. It looked a bit strange, but after checking error.log I found the following error:

Apache: No space left on device: Cannot create SSLMutex

After searching the web I found that Apache was leaving a bunch of stray semaphore sets lying around after an attempted restart of httpd / apache. In layman's terms, a semaphore here is a lock object left behind in memory by a dead or stuck process. Don't worry, there is a way out: we need to list those stale semaphores belonging to apache and then remove them. Use the following command to list them.

ipcs -s | grep apache
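The grep-filtered output will look something like this (the values here are made up); the second column is the semaphore ID, which is what the cleanup command below extracts with awk and hands to ipcrm:

0x00000000 1441792    apache     600        1
0x00000000 1474561    apache     600        1
0x00000000 1507330    apache     600        1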

Most likely you will see a fairly large list here. You need to, and it is safe to, have these deleted. The following command will do the trick:

ipcs -s | grep apache | awk ' { print $2 } ' | xargs -n 1 ipcrm -s 

Note: If your apache is running as nobody or another user, be sure to substitute that other user in place of  apache above.
————————————————————————————————————-

Cannot create SSLMutex : the solution

At the heart of the problem is most likely a poorly configured Apache server. By default, SSLMutex is left at its default setting, as it was on this one server of ours. If you read the Apache.org pages for the mod_ssl configuration, they have this to say about the default setting:

The following Mutex types are available:

none | no

This is the default where no Mutex is used at all. Use it at your own risk. But because currently the Mutex is mainly used for synchronizing write access to the SSL Session Cache you can live without it as long as you accept a sometimes garbled Session Cache. So it’s not recommended to leave this the default. Instead configure a real Mutex.


There are of course other configuration options. At the very least, it is suggested that you set SSLMutex to sem, which lets Apache pick the best available semaphore mechanism for the mutex.

You will most likely find this setting in the ssl.conf file located at /etc/httpd/conf.d.
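As a minimal sketch (assuming a RHEL/CentOS-style layout with mod_ssl configured in /etc/httpd/conf.d/ssl.conf), the relevant line would look something like one of the following. Note that in Apache 2.4 the SSLMutex directive was replaced by the core Mutex directive, so this applies to the 2.0/2.2 series:

# /etc/httpd/conf.d/ssl.conf  (Apache 2.0/2.2 with mod_ssl)
SSLMutex sem
# or, using a lock file instead of a semaphore (path is just an example):
# SSLMutex file:/var/run/httpd/ssl_mutex

Restart httpd after the change so the new mutex setting takes effect.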

Automate execution of shell scripts owned by non-root users at boot

Hi all, this is really going to be useful. Many times we need to execute some commands or shell scripts as a non-root user at boot time; say, for example, you need a samba share to be mounted by a non-root user when the system boots. Many people argue about why you would need to mount a share as a non-root user. A good answer for that argument is that not all applications on a server run as root; for security reasons it is good practice to have different non-root users own different applications.

Follow these steps to execute a single command as a non-root user at boot time on unix/linux.

  1. Edit your rc.local script, as this is the script which is executed immediately after booting; it is usually found under the /etc/ directory. Add the following line to run a single command (for example, mounting an external samba share) as a non-root user:

    su - {userid} -c "{COMMAND}"

    If your command has many arguments, or you have more than one command to execute, then put them all together in one shell script and run that script as the user:

    su - {userid} -c {path-to-shell-script}

    e.g. if you need two mounts on a single linux server and those two mounts live on different servers, then your shell script (called myscript here) will look something like the one below. It is also a good idea to keep that script in the user's home directory, or in a directory which is accessible by that non-root user, otherwise it will not work.


    #!/bin/bash
    /usr/bin/smbmount //{first server name OR IP address}/{share name} {first local path to mount} -o username={smb-user},password={smb-password},rw
    /usr/bin/smbmount //{second server name OR IP address}/{share name} {second local path to mount} -o username={smb-user},password={smb-password},rw
    exit;

    And for this case your entry in /etc/rc.local will be:

    su - {non-root-user} -c /home/{non-root-user}/myscript

Make sure that the non-root user has a valid shell available, otherwise this will not work.

Configure Sendmail to log “Subject” line for each email sent.

Hi Friends,
In this article we will learn how to configure sendmail to log the "Subject" line in /var/log/maillog, since by default sendmail does not log the Subject to the maillog file.

This is really interesting. Business people are often interested in getting mail log files analyzed. To analyse mail logs they need various fields to appear in them, e.g. "From", "To" and "Subject" from each sent email. By default sendmail logs the From and To fields but it does not log the Subject field. In this article you will learn how to enable sendmail to log the "Subject".

  • First of all, take a backup of your "sendmail.mc" and "sendmail.cf" files. The default location for those files is /etc/mail.
  • Now open "sendmail.mc" in your favourite editor, add the following lines to it and save it. I usually prefer to add them at the bottom of the file so you can easily identify your modifications.

LOCAL_CONFIG
Klog syslog
HSubject: $>+CheckSubject

LOCAL_RULESETS
SCheckSubject
R$*	$: $(log Subject: $1 $) $1

This last line is very crucial. The whitespace between R$* and $: must be a literal TAB character (press the TAB key there; sendmail requires a tab between the two halves of a rule). The remaining separators are ordinary spaces: one after log, one after Subject:, one after the first $1, and one between the closing $) and the final $1.

  1. Now you need to regenerate the sendmail.cf file; use the m4 macro processor to do so, running it from /etc/mail.

    #m4 sendmail.mc > sendmail.cf

  • Now restart sendmail and verify your maillog file. You will see a Subject line logged for each mail that is sent from or relayed through your email server.
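A quick way to test, assuming a RHEL/CentOS-style init script and that a command-line mail client (mailx) is installed; the recipient address is just a placeholder:

service sendmail restart
echo "test body" | mail -s "subject logging test" someone@example.com
tail -f /var/log/maillog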

UMASK for sftp users / connections – Linux / centos / fedora / ubuntu

As the Internet keeps growing, all sorts of requirements and demands come along. Recently I was setting up a web development server where multiple developers needed to add / edit / update files in the same directory. Initially I thought I would create a group, add those developers to that group and everything would be done, but since the default UMASK on a Linux server is set to 0022, it doesn't grant write permission to the group by default. Hence I needed to change the UMASK to 0002 for those sftp users, and here you go. There are multiple ways to achieve this. The first way is to set up a shell script which starts the sftp subsystem with umask 0002.

Create following shell script:

#!/bin/bash

umask 0002

# The path to your sftp-server binary may differ
exec /usr/libexec/openssh/sftp-server

Then I pointed the Subsystem directive in the sshd_config file to my script:

Subsystem       sftp    /opt/sftp-server-script.sh

A quick restart/reload of the sshd configuration and I was in business. Both users could see and edit each other's files. Email or comment with questions.

—Second option :
Or, even simpler still, as @Gilles pointed out in the comments, you can do away with the wrapper script entirely and simply change the Subsystem line in your sshd_config to this:

Subsystem sftp /bin/sh -c 'umask 0002; /usr/libexec/openssh/sftp-server'

Thanks Mate… much appreciated.

—Third Option
There is a newer flag for sftp-server, '-u', that allows you to directly set the umask, overriding the user's umask (it appeared in relatively recent OpenSSH releases, roughly 5.4 and later). To use it, just do this:

Subsystem sftp /usr/libexec/openssh/sftp-server -u 0002

What is a shell?

A shell is a program which allows users to interact with the computer.

A shell is a command language interpreter that executes commands read from the standard input device (keyboard) or from a file. Several shells available for Linux and UNIX operating systems are:

SH (Bourne Shell) Old Unix Shell.
BASH (Bourne-Again Shell) GNU
CSH (C Shell) BSD
TCSH (Popular extension of C Shell)
KSH (Korn Shell) Bell Labs
ZSH (Popular Extension of Korn Shell)
RSH (Remote Shell) TCP/IP

Users use the keyboard to send commands to the system. The interface they are using is called the CLI (Command Line Interface). If you are a normal user (non-administrator), the prompt is called the "$" prompt (dollar prompt). If you are the administrator (super user), the prompt is called the "#" prompt (pound / hash).

If you want to know which shells are available on your system, use the following command.
$ cat /etc/shells
OR
# cat /etc/shells
Note : The above command will display the content of the file "/etc/shells" on your screen.
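On a typical Linux box the file looks something like this (the exact entries vary by distribution and installed packages):

/bin/sh
/bin/bash
/sbin/nologin
/bin/tcsh
/bin/csh
/bin/ksh
/bin/zsh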

How do I find out what shell I’m using?

As we mentioned earlier, the shell is a program which allows users to interact with the system. We can find out which shell you are using right now using the "ps" command with the -p switch.
ps -p $$
So what is the $$ argument passed to the -p option? $$ is a special shell variable that expands to the PID (process identification number) of the current process, and the current process is your shell. So running ps on that number displays a process status listing of your shell. In that listing you will find the name of your shell (look at the CMD column).

nilesh@gnulinux:~$ ps -p $$
  PID TTY          TIME CMD
 3301 pts/0    00:00:00 bash


How to setup redundant NFS Servers. Heartbeat-DRBD-NFS. centos/redhat/ubuntu/debian

Hi all, hope you are enjoying the series of articles published on this website for free.. totally free.

In this article you will learn how to set up redundant NFS servers using DRBD (Distributed Replicated Block Device) technology. The complete step-by-step procedure is listed below. It works for me, so hopefully it will work for you also; no guarantees are given. You will need:

1. Two servers with similar storage disk (harddrive) setup (To create a redundant nfs server)
2. One client system/server where the nfs share will be mounted.
3. Static IPs for all servers.

The first step is to install CentOS on both machines. During the install process, create a separate blank partition on both machines to be used as your nfs mount. Make sure you create exactly the same size partition on both servers. Set the mount point to /nfsdata during installation.

From this point on I'm going to refer to both nfs servers by their IPs and hostnames:
server1 will be nfs1 with IP 10.10.10.1 and server2 will be nfs2 with IP 10.10.10.2. You may use a different range of private IPs, so make sure to put in the correct IPs where necessary within this how-to.

Do the following on nfs1(10.10.10.1) and nfs2(10.10.10.2):

To view mount points on your system:

vi /etc/fstab

Search for the /nfsdata mount point and comment it out to prevent it from automatically being mounted on boot. Take note of the device for the /nfsdata mount point. Here is what my fstab looks like.

LABEL=/boot /boot ext3 defaults 1 2
#/dev/VolGroup00/LogVol04 /nfsdata ext3 defaults 1 2
……

Note:

If you are going to use external storage as the NFS partition then you can set that up after the base OS installation; it is not mandatory to create the nfs partition during OS installation. Also, it is best to use LVM for partitioning instead of a fixed-size device, as LVM gives greater flexibility.

Now unmount the /nfsdata partition if it is mounted, as heartbeat will take care of mounting it.

umount /nfsdata

Make sure that ntp and ntpdate are installed on both of the nfs servers.

yum install ntp ntpdate

The time on both servers must be identical (or at least kept closely in sync). Edit your /etc/ntp.conf file and verify the settings.
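A minimal sketch using the CentOS 5-era init tools (this assumes the default pool servers in /etc/ntp.conf are acceptable for your environment):

ntpdate pool.ntp.org     # one-off sync before starting the daemon
chkconfig ntpd on        # start ntpd on every boot
service ntpd start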

Now let's check and make sure that the nfs service does not run on startup and that SELinux is turned off.

setup

If you have not installed the full operating system or all the administration tools then you can use

system-config-securitylevel-tui

and disable firewall and selinux.
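Equivalently, from the command line (a sketch; nfs and nfslock must not start on their own because heartbeat will start them for us later):

chkconfig nfs off
chkconfig nfslock off
setenforce 0                                                    # disable SELinux for the running system
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # and keep it disabled across reboots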

Now we will install DRBD and the DRBD kernel module.

Note: If you are installing DRBD and the DRBD kernel module on a physical system then you will need to install:
drbd-8.0.16-5.el5.centos.x86_64.rpm AND kmod-drbd-8.0.16-5.el5_3.x86_64.rpm
And if you are using DRBD and the DRBD kernel module on a virtual machine (VM) then you need to install:
drbd-8.0.16-5.el5.centos.x86_64.rpm AND kmod-drbd-xen-8.0.16-5.el5_3.x86_64.rpm
You can choose the package version according to your needs.
Along with DRBD and the DRBD kernel module, we will install the following RPMs as well.

yum install Perl-TimeDate net-snmp-libs-x86_64

rpm -Uvh heartbeat-pils-2.1.4-2.1.x86_64.rpm

rpm -Uvh heartbeat-stonith-2.1.4-2.1.x86_64.rpm

rpm -Uvh heartbeat-2.1.4-2.1.x86_64.rpm

Now it's time to edit the drbd.conf file; /etc/drbd.conf is the default location for that config file.

common { syncer { rate 100M; al-extents 257; } }

resource r0 {
protocol C;
handlers { pri-on-incon-degr "halt -f"; }
disk { on-io-error detach; }
startup { degr-wfc-timeout 60; wfc-timeout 60; }

on nfs01 {
address 10.10.10.1:7789;
device /dev/drbd0;
disk /dev/LogVol04/nfsdrbd;
meta-disk internal;
}
on nfs02 {
address 10.10.10.2:7789;
device /dev/drbd0;
disk /dev/LogVol04/nfsdrbd;
meta-disk internal;
}
}

Let me give you a little bit more information about the parameters used in the above config file.
So let's start from the top.

  • Protocol – This is the method that drbd will use to sync both of the nfs servers. There are three available options here: Protocol A, Protocol B and Protocol C.
    Protocol A is an asynchronous replication protocol. The drbd.org manual states, “local write operations on the primary node are considered completed as soon as the local disk write has occurred, and the replication packet has been placed in the local TCP send buffer. In the event of forced fail-over, data loss may occur. The data on the standby node is consistent after fail-over, however, the most recent updates performed prior to the crash could be lost.”
    Protocol B is a memory synchronous (semi-synchronous) replication protocol. The drbd.org manual states, “local write operations on the primary node are considered completed as soon as the local disk write has occurred, and the replication packet has reached the peer node. Normally, no writes are lost in case of forced fail-over. However, in the event of simultaneous power failure on both nodes and concurrent, irreversible destruction of the primary’s data store, the most recent writes completed on the primary may be lost.”

Protocol C is a synchronous replication protocol. The drbd.org manual states, “local write operations on the primary node are considered completed only after both the local and the remote disk write have been confirmed. As a result, loss of a single node is guaranteed not to lead to any data loss. Data loss is, of course, inevitable even with this replication protocol if both nodes (or their storage subsystems) are irreversibly destroyed at the same time.”

You may choose your desired protocol but Protocol C is the most commonly used one and it is the safest method.

  • rate – The rate is the maximum speed at which data will be sent from one nfs server to the other while syncing. This should be about a third of your maximum write speed. In my case, I have only a single disk that can write about 45mb/sec so a third of that would be 15mb. This number will usually be much higher for people with raid setups. In some large raid setups, the bottleneck would be the network and not the disks so set the rate accordingly.
  • al-extent – The data on the disk is cut up into slices for synchronization purposes. For each slice there is an al-extent that is used to indicate any changes to that slice. Larger al-extent values make synchronization slower but benefit from fewer writes to the metadata partition. In my case, I am using internal metadata, which means the drbd metadata is written to the same partition that my nfs data is on. It benefits me to have fewer metadata writes, to prevent the disk arm from constantly moving back and forth and degrading performance. If you are using a raid setup and a separate partition for the metadata then set this number lower to benefit from faster synchronization. This number should be a prime to gain the most possible performance, because it is used in specific hashes that benefit from prime-number-sized structures.
  • pri-on-incon-degr – The “halt -f” command is executed if the node is primary, degraded and the data is inconsistent. I use this to make sure drbd is halted when there is some sort of data inconsistency, to prevent a major mess from occurring.
  • on-io-error – This allows you to handle low level I/O errors. The method I use is the “detach” method. This is the recommended option by drbd.org. On the occurrence of a lower-level I/O error, the node drops its backing device, and continues in diskless mode.
  • degr-wfc-timeout – This is the amount of time in seconds that is allowed before a connection is timed out. In case a degraded cluster (cluster with only one node left) is rebooted, this timeout value is used instead of wfc-timeout, because the peer is less likely to show up in time, if it had been dead before.

The rest of the config is pretty self explanatory. Replace nfs1 and nfs2 with the hostnames of your nfs servers. To get the hostnames use the following command on both servers:

uname -n

Then replace the disk value with the device name from your fstab file that you commented out. Enter the IP address of each server and use port 7789. The last part is the meta-disk. I used an internal meta-disk because I only have one hard disk in the server and it would not give me any benefit to create a separate partition for the metadata. If you have a raid setup, or a separate disk from your data partition that you can use for the metadata, then go ahead and create a 150 MB partition. Replace the word “internal” in the config file with the device name that you used for the metadata partition.

Now that we finally have our drbd.conf file ready, we can move on. Let's go ahead and load the drbd kernel module.

modprobe drbd

Now that the kernel module is loaded, let's start up drbd.

drbdadm up all

This will start drbd; now let's check its status.

cat /proc/drbd

You can always use the above command to check the status of drbd. The above command should show you something like this.

0: cs:Connected st:Secondary/Secondary ld:Inconsistent
ns:0 nr:0 dw:0 dr:0 al:0 bm:1548 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

You should get some more data before it, but the above part is what we are interested in. Notice that it shows drbd is connected and both nodes are in secondary mode. This is because we have not assigned which node is going to be the primary yet. It also says the data is inconsistent because we have not done the initial sync yet.

I am going to set nfs1 to be my primary node and nfs2 to be my secondary node. If nfs1 fails, nfs2 will take over, but if nfs1 comes back online then all the data from nfs2 will be synced back to nfs1 and nfs1 will take over again.

First of all, let's go ahead and delete any data that was created on the /data partition (the partition we set up as /nfsdata during the initial OS installation). Be very careful with the command below: make sure to use the appropriate device, because all data on that device will be lost.

dd if=/dev/zero bs=1M count=1 of=/dev/VolGroup00/LogVol04; sync

Replace “/dev/VolGroup00/LogVol04” with your device for the /data partition. Now that the partition is completely erased on both servers, let's create the metadata.

drbdadm create-md r0

Do the following ONLY on nfs1(10.10.10.1)

Now that the metadata is created, we can move onto assigning a primary node and conducting the initial sync. It is absolutely important that you only execute the following command on the primary node. It doesn’t matter which node you choose to be the primary since they should be identical. In my case, I decided to use nfs1 as the primary.

drbdadm -- --overwrite-data-of-peer primary r0

OK, now we just have to sit back and wait for the initial sync to finish. This is going to take some time: even though there is no data on the device yet, drbd has to sync every single block of the /data partition from nfs1 to nfs2. You can check the status by using the following command.

cat /proc/drbd

Do the following on nfs1(10.10.10.1) and nfs2(10.10.10.2):

After the initial sync is finished, “cat /proc/drbd” should show something like this.

0: cs:Connected st:Primary/Secondary ld:Consistent
ns:12125 nr:0 dw:0 dr:49035 al:0 bm:6 lo:0 pe:0 ua:0 ap:0
1: cs:Unconfigured

If you notice, we are still connected and have a primary and secondary node with consistent data.

Do the following ONLY on nfs1(10.10.10.1) :

Now let's make an ext3 file system on our drbd device and mount it. Since drbd is running, the ext3 file system will also be created on the secondary node.

mkfs.ext3 /dev/drbd0

The above command will create an ext3 file system on the drbd device. Now let's go ahead and mount it.

mount -t ext3 /dev/drbd0 /data

We know that NFS stores important state in /var/lib/nfs by default, which it requires to function correctly. In order to preserve file locks and other important information, we need to have that data stored on the drbd device, so that if the primary node fails, NFS on the secondary node will continue from right where the primary node left off.

mv /var/lib/nfs/ /data/
ln -s /data/nfs/ /var/lib/nfs
mkdir /data/export
umount /data

So let's go over what we just did.

  • We have now moved the nfs folder from /var/lib to /data.
  • We created a symbolic link from /var/lib/nfs to /data/nfs since the operating system is still going to look for /var/lib/nfs when nfs is running.
  • We created an export directory in /data to store all the actual data that we are going to use for our nfs share.
  • Finally, we un-mounted the /data partition since we finished what we were doing.

Do the following ONLY on nfs2(10.10.10.2):
Since we moved the nfs folder to /data, that was synced over to the secondary node as well. We just need to create the symbolic link so that when the /data partition is mounted on nfs2 we have a link to the nfs data.

rm -rf /var/lib/nfs/
ln -s /data/nfs/ /var/lib/nfs

So we removed the nfs folder and created a symbolic link from /var/lib/nfs to /data/nfs. The symbolic link will be broken for now, since the /data partition is not mounted. Don't worry about that, because in the event of a failover that partition will be mounted and everything will be fine.

Now we need to configure heartbeat on both nfs servers, nfs1 and nfs2; we have already installed the required software.
Create /etc/ha.d/ha.cf on both nfs servers with the following contents in it:

keepalive 2
deadtime 30
bcast eth0
node ukibinfs01 ukibinfs02

Replace the names on the “node” line with your own hostnames. To find out your hostname, use uname -n.

Now we need to create the “/etc/ha.d/haresources” configuration file on both nfs servers with the following configuration:

nfs1 IPaddr::10.10.10.3/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfslock nfs
The IP address used in the haresources config file is a floating IP. Whichever of the two NFS servers is currently primary will have that IP configured on eth0:0.
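To round things off, here is a minimal sketch of the remaining steps. The /etc/exports entry, the client mount point and the 10.10.10.0/24 network are assumptions; adjust them to your environment:

# on nfs1 and nfs2: export the replicated directory
echo "/data/export 10.10.10.0/24(rw,sync,no_root_squash)" >> /etc/exports

# on nfs1 and nfs2: let heartbeat bring up the floating IP, the DRBD disk, the mount and NFS
chkconfig heartbeat on
service heartbeat start

# on the client: mount the share via the floating IP
mkdir -p /mnt/nfs
mount -t nfs 10.10.10.3:/data/export /mnt/nfs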

Quick How To on Logical Volume Manager (LVM)

How to create / manage partitions using the Logical Volume Manager (LVM).

In computer storage, logical volume management or LVM provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, a volume manager can concatenate, stripe together or otherwise combine partitions into larger virtual ones that administrators can re-size or move, potentially without interrupting system use.
Volume management represents just one of many forms of storage virtualization; its implementation takes place in a layer in the device-driver stack of an OS (as opposed to within storage devices or in a network).
Important terms :
PV : Physical Volume : hard disks, hard disk partitions or LUNs (Logical Unit Numbers).
VG : Volume Group : one physical volume or multiple physical volumes make up one volume group.
LV : Logical Volume : logical volumes are carved out of a volume group; one volume group can hold many logical volumes.

Scenario 1 : You have some free (unallocated) space on a hard drive and you want to create additional logical volumes after the OS installation, and you have already created partitions using LVM during the OS installation. CentOS, Red Hat and Fedora do create LVM volumes by default during installation.
Please refer to the following screenshot. I have used 3 commands: pvscan, vgscan, lvscan.

(Screenshot: output of pvscan, vgscan and lvscan on the example system.)

I am using CentOS 5.7 for this tutorial. By default the system has created a physical volume called /dev/vda2; the pvscan command is used to list the current physical volumes on a Linux system.
Then you can see that one volume group, VolGroup00, has been created; the vgscan command is used to list the current volume groups on your (GNU/)Linux system.
The actual partition we can mount is called a logical volume; the lvscan command is used to list the actual partitions, or logical disks. /dev/VolGroup00/LogVol00 is one of the logical volumes on my system.

Steps to create a new logical volume:
e.g. you have added a new disk to your server and it is listed as the /dev/vdb block device under the /dev directory on your system. vd is the notation for a virtual disk; it could be xvdb, xvdc or xvdd if you are using Citrix virtualization, and /dev/sdb, /dev/sdc or /dev/sdd if the disk is a physical disk or VMware virtualization.

1. pvcreate /dev/vdb (to create a physical volume from the newly added disk). This step is mandatory even if you are only planning to extend an existing volume group.
2. vgcreate <volume group name> /dev/vdb
E.g. vgcreate volgroup100 /dev/vdb
3. lvcreate -L 100G -n <logical volume name> <volume group name from step 2>
E.g. lvcreate -L 100G -n logvol100 volgroup100
4. Now use the lvscan command to list all logical volumes; you will be able to see the newly added LV.
5. Now we need to format the newly created logical volume so that we can mount it on the system:
mkfs.ext4 /dev/volgroup100/logvol100
The whole sequence is consolidated in the sketch below.
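Here is that sequence as a single hedged sketch; the device name /dev/vdb, the names volgroup100 / logvol100 and the mount point /data100 are only examples, so substitute your own:

pvcreate /dev/vdb                           # initialise the new disk as a physical volume
vgcreate volgroup100 /dev/vdb               # create a volume group on it
lvcreate -L 100G -n logvol100 volgroup100   # carve out a 100 GB logical volume
lvscan                                      # confirm the new LV is listed
mkfs.ext4 /dev/volgroup100/logvol100        # put a filesystem on it
mkdir -p /data100
mount /dev/volgroup100/logvol100 /data100   # mount it (add an /etc/fstab entry to make it permanent)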

Basic MySQL administration

CONTENTS:

About this Tutorial
MySQL Administration
Working with Tables
MySQL Commands vs. Postgres Commands

About this Tutorial
I’ll be adding to this tutorial in the coming weeks. If you have ideas of things you’d really like to see here, email me.
MySQL Administration
This section goes over the basics of maintaining a database rather than actual data. It includes creation of databases, creation of users, granting of privileges to users, changing passwords, and other activities from within the mysql CLI front end to MySQL.
Entering and Exiting the mysql Program

Your starting point for MySQL maintenance is the mysql program:

[nilesh@nilesh]$ mysql
ERROR 1045: Access denied for user: 'nilesh@localhost' (Using password: NO)
[nilesh@nilesh]$

Oops! User nilesh is apparently password protected, so we must run mysql so that it asks for a password, and then type the password in (which of course types invisibly)…

[nilesh@nilesh]$ mysql -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 41 to server version: 4.0.18

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> quit
Bye
[nilesh@nilesh]$

In the preceding, you ran mysql with the -p option so that it would query for the password, you typed in the password, and then you exited the mysql program by typing quit.

By the way, if the account has no password (this is sometimes an installation default), you would just press Enter when prompted for the password.

Perhaps you want to log in as root instead of nilesh. This would probably be the case if you wanted to add a database or any other high responsibility action. Here’s how you log in as root:

[nilesh@nilesh]$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 42 to server version: 4.0.18

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> quit
Bye
[nilesh@nilesh]$

The -u root tells mysql to log you in as root, and as previously discussed, the -p tells mysql to query for a password.

In mysql, when you see a prompt like this:

mysql>

you are in the mysql program. For the rest of this section, most examples will begin and end in mysql.

There is more than one root account, and you must password protect all of them. Read on…

mysql> use mysql
Database changed
mysql> select host, user, password from user;
+-----------------------+--------+------------------------------------------+
| host                  | user   | password                                 |
+-----------------------+--------+------------------------------------------+
| localhost             | root   | 195CABF93F868C84F7FB2CD44617E468487551B6 |
| localhost.localdomain | root   | 195CABF93F868C84F7FB2CD44617E468487551B6 |
| localhost             |        |                                          |
| localhost.localdomain |        |                                          |
| localhost             | nilesh | 195CABF93F868C84F7FB2CD44617E468487551B6 |
+-----------------------+--------+------------------------------------------+
5 rows in set (0.00 sec)

mysql>

As you can see, there's a root account at localhost and another at localhost.localdomain. Both must be password protected. In reality, all accounts should be password protected.
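For example (a sketch; substitute a real password of your own), the second root account could be locked down from within mysql like this:

mysql> set password for 'root'@'localhost.localdomain' = password('your-new-password');
Query OK, 0 rows affected (0.00 sec)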

Exploring Your MySQL Installation
From within mysql you can find quite a bit of information concerning your installation. As what user are you logged into mysql? What databases exist? What tables exist in a database? What is the structure of a table? What users exist?

Because different operations require different privileges, to save time we’ll perform all these actions logged into mysql as root.

Let’s start with finding out your username within mysql:

mysql> select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
1 row in set (0.00 sec)

mysql>

Now let’s list all the users authorized to log into mysql:

mysql> use mysql
Database changed
mysql> select host, user, password from user;
+-----------------------+--------+------------------------------------------+
| host                  | user   | password                                 |
+-----------------------+--------+------------------------------------------+
| localhost             | root   | 195CABF93F868C84F7FB2CD44617E468487551B6 |
| localhost.localdomain | root   | 195CABF93F868C84F7FB2CD44617E468487551B6 |
| localhost             |        |                                          |
| localhost.localdomain |        |                                          |
| localhost             | nilesh | 195CABF93F868C84F7FB2CD44617E468487551B6 |
+-----------------------+--------+------------------------------------------+
5 rows in set (0.00 sec)
mysql>

Notice there are two root logons — one at localhost and one at localhost.localdomain. They can have different passwords, or one of them can even have no password. Therefore, YOU’D BETTER check to make sure they’re both password protected. DO NOT forget the password(s). If you lose all administrative passwords, you lose control of your mysql installation. It’s not like Linux where you can just stick in a Knoppix CD, mount the root directory, and erase the root password in /etc/passwd.
User Maintenance
Before discussing user maintenance, it's necessary to understand what properties exist for each user. To do that, we'll use mysql's -e option to execute SQL commands and push the output to stdout. Watch this:

[nilesh@nilesh]$ mysql -u root -p -e "use mysql;describe user;" | cut -f 1
Enter password:
Field
Host
User
password
Select_priv
Insert_priv
Update_priv
Delete_priv
Create_priv
Drop_priv
Reload_priv
Shutdown_priv
Process_priv
File_priv
Grant_priv
References_priv
Index_priv
Alter_priv
Show_db_priv
Super_priv
Create_tmp_table_priv
Lock_tables_priv
Execute_priv
Repl_slave_priv
Repl_client_priv
ssl_type
ssl_cipher
x509_issuer
x509_subject
max_questions
max_updates
max_connections
[nilesh@nilesh]$

The word “Field” is a heading, not a piece of information about a user. After that, the first three fields are the user's host, username and password. The password is stored as a hash. If you hadn't included the cut command, you'd have seen that the same username can exist on multiple hosts, even if both hosts refer to the same physical machine. That's why it's vital to MAKE SURE TO password protect ALL users. The next several fields after password are privileges that can be granted to the user, or not.

As will be discussed later, there’s a way to grant ALL privileges to a user. From a security point of view this is very dangerous, as ordinary looking users can be turned into backdoors. I’d suggest always granting and revoking specific privileges. Here is a list of the privileges that MySQL users can have, and what those privileges allow them to do:

USER FIELD (PRIVILEGE): FUNCTION

Select_priv (Select): Ability to use the select command. In other words, ability to read.
Insert_priv (Insert): Ability to insert new data (insert a new row).
Update_priv (Update): Ability to change existing data (change the contents of a row).
Delete_priv (Delete): Ability to delete rows of existing data.
Create_priv (Create): Ability to create a new table.
Drop_priv (Drop): Ability to drop a table.
Reload_priv (Reload)
Shutdown_priv (Shutdown)
Process_priv (Process)
File_priv (File)
Grant_priv (Grant): Ability to grant and revoke privileges to others.
References_priv (References)
Index_priv (Index): Ability to create new indexes or drop indexes.
Alter_priv (Alter): Ability to change the structure of a table.
Show_db_priv
Super_priv
Create_tmp_table_priv: Ability to create temporary tables.
Lock_tables_priv: Ability to lock tables.
Execute_priv
Repl_slave_priv
Repl_client_priv

The root user, or any user given sufficient privileges, can create new users with the grant command:

mysql> grant select on test2.* to myuid@localhost identified by 'mypassword';
Query OK, 0 rows affected (0.00 sec)

mysql> select host, user, password from user;
+-----------------------+--------+------------------------------------------+
| host                  | user   | password                                 |
+-----------------------+--------+------------------------------------------+
| localhost             | root   | 195CABF93F868C84F7FB2CD44617E468487551B6 |
| localhost.localdomain | root   | 195CABF93F868C84F7FB2CD44617E468487551B6 |
| localhost             |        |                                          |
| localhost.localdomain |        |                                          |
| localhost             | nilesh | 195CABF93F868C84F7FB2CD44617E468487551B6 |
| localhost             | myuid  | 195CABF93F868C84F7FB2CD44617E468487551B6 |
+-----------------------+--------+------------------------------------------+
6 rows in set (0.00 sec)

mysql>

In the preceding, we grant one privilege, select, on every table in the test2 database (test2.*), to user myuid at host localhost (myuid@localhost), giving that user the password “mypassword”. We then query table user in the mysql database (mysql.user) in order to see whether user myuid has been created. Indeed he has.

Granting the select privilege is insufficient for any user running an app that modifies data. Let's give myuid more privileges in the test2 database:

mysql> grant Insert, Update, Delete, Create on test2.* to myuid@localhost;
Query OK, 0 rows affected (0.00 sec)

mysql>

Now user myuid can not only select, but can insert rows, update rows, delete rows, and create tables and databases. Conspicuously missing is the ability to drop tables and databases — that can be more dangerous, in the hands of a careless but not malicious user, than some of the other abilities.

Privileges granted by the grant command are not kept in the mysql.user table, but instead in the mysql.db table. You can see results by issuing this command from the operating system:

[nilesh@nilesh] mysql -u root -p -e 'select * from mysql.db where host="localhost" and user="myuid";' > temp.txt
Enter password:
[nilesh@nilesh]

The results look like this:

Host    Db    User    Select_priv    Insert_priv    Update_priv    Delete_priv    Create_priv    Drop_priv    Grant_priv    References_priv    Index_priv    Alter_priv    Create_tmp_table_priv    Lock_tables_priv
localhost    test2    myuid    Y    Y    Y    Y    Y    N    N    N    N    N    N    N

You can revoke privileges like this:

mysql> revoke Delete, Create on test2.* from myuid@localhost;
Query OK, 0 rows affected (0.00 sec)

mysql>

If you redo the select on mysql.db, you’ll see that those two privileges are now marked N.

To actually delete a user, you use a SQL statement to delete him from the mysql.user table after revoking all his privileges.

When deleting the user from mysql.user, if you forget the where clause, or any of its tests, especially the test on column user, you will delete too many users — possibly all users, in which case you’ll have no way to operate the database. BE VERY CAREFUL!

Now that you understand the potential landmines, here’s how you delete a user:

mysql> revoke all on test2.* from myuid@localhost;
Query OK, 0 rows affected (0.00 sec)

mysql> delete from mysql.user where user='myuid' and host='localhost';
Query OK, 1 row affected (0.00 sec)

mysql>
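One extra step worth noting: because this deletes from the grant table directly (rather than going through GRANT/REVOKE), the server keeps serving its cached copy of the privileges until you tell it to reload them:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)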

Exploring Your Databases
You can refer to a table in two ways: by name, after connecting to its database with a use statement, or with the databasename.tablename syntax. The former is much more common in applications, but the latter is often used in database administration, especially when you must access the system database (mysql) in order to perform work on a different database.
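A quick illustration of the two forms (both queries return the same rows):

mysql> select user from mysql.user;   # databasename.tablename syntax, no use statement needed
mysql> use mysql;
mysql> select user from user;         # plain table name after a use statement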

The first item for exploration is to find all databases:

mysql> show databases;
+-------------------+
| Database          |
+-------------------+
| depot_development |
| depot_production  |
| depot_test        |
| mysql             |
| test              |
| test2             |
+-------------------+
6 rows in set (0.00 sec)

mysql>

In the preceding, you went into the mysql program and determined what databases exist. Now let's explore the test database:

mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| dogs           |
| people         |
+----------------+
2 rows in set (0.00 sec)

mysql>

So the database test has two tables, dogs and people. Let’s examine the columns in each of those tables:

mysql> show columns from dogs;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| owner | varchar(20) |      | PRI |         |       |
| name  | varchar(20) |      | PRI |         |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> show columns from people;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| lname | varchar(20) |      | PRI |         |       |
| fname | varchar(20) |      |     |         |       |
| mname | varchar(16) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
3 rows in set (0.00 sec)

mysql>

Another way to get the same information is with the describe command:

mysql> describe dogs;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| owner | varchar(20) |      | PRI |         |       |
| name  | varchar(20) |      | PRI |         |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> describe people;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| lname | varchar(20) |      | PRI |         |       |
| fname | varchar(20) |      |     |         |       |
| mname | varchar(16) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
3 rows in set (0.00 sec)

mysql>
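
If you want the complete definition, including keys and table options, the show create table statement prints the DDL that would recreate the table (the exact output varies with MySQL version and storage engine):

mysql> show create table people;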

Working with Tables
The following script creates and loads the tables. The explanations that originally appeared to the right of the code are included here as comments:

####################################################
# CREATE THE DATABASE.
# Use this block only if creating a new database;
# otherwise leave it commented out.
####################################################
#drop database if exists test2;
#create database test2;
#use test2;

####################################################
# REINITIALIZE THE TABLES. ASSUME USER IS ALREADY
# CONNECTED TO THE PROPER DATABASE FROM WITHIN
# mysql OR psql.
####################################################
# Drop the tables if they exist, to make room for new
# tables of the same name.
drop table if exists members;
drop table if exists families;

####################################################
# CREATE THE TWO TABLES FORMING A 1 TO MANY RELATIONSHIP.
# FAMILIES IS THE ONE, AND MEMBERS IS THE MANY.
# CREATE UNIQUE INDEX SUCH THAT FAMILY_ID PLUS NAME IN
# MEMBERS IS FORCED TO BE UNIQUE.
####################################################
# The families table is the "one" of the one-to-many.
create table families (
id int not null auto_increment,
name varchar(20) not null,
primary key (id)
);
show tables;

# The members table is the "many" of the one-to-many:
# members.family_id matches families.id. The foreign key
# means you can't delete a family that still has members.
create table members (
id int not null auto_increment,
family_id int not null,
name varchar(16) not null,
primary key (id),
foreign key (family_id) references families (id)
on delete restrict
on update cascade
);

# This index prevents two family members from having the
# same first name.
create unique index familymember on members (family_id, name);

# Show the structures of the two tables just created.
describe families;
describe members;

####################################################
# LOAD families WITH THREE ROWS
####################################################
insert into families (name) values ('Albertson');
insert into families (name) values ('Becker');
insert into families (name) values ('Cintez');

####################################################
# LOAD members WITH THREE ROWS FOR THE 'Albertson'
# FAMILY. USE MONOLITHIC SQL STATEMENTS TO ACCOMPLISH
# THIS.
####################################################
# Monolithic insert from select.
insert into members (family_id, name)
select families.id, 'Alvin' from families
where families.name = 'Albertson';

# Monolithic insert from select.
insert into members (family_id, name)
select families.id, 'Andrea' from families
where families.name = 'Albertson';

# Monolithic insert from select.
insert into members (family_id, name)
select families.id, 'Arthur' from families
where families.name = 'Albertson';

####################################################
# LOAD members WITH THREE ROWS EACH FOR THE Becker AND
# Cintez FAMILIES. INSTEAD OF MONOLITHIC SQL STATEMENTS,
# LOOK UP families.id FROM families.name, AND THEN
# USE THAT id TO INSERT THE MEMBERS.
# SETTING @id TO NULL PREVENTS USAGE OF PREVIOUS VALUES
# WHEN THE SELECT'S WHERE CLAUSE FAILS.
####################################################
# The following inserts are performed more procedurally,
# by first finding families.id based on families.name, and
# then using that id as members.family_id.

select @id:=null;                                   # prevent ghosts of selects past
select @id:=id from families where name='Becker';   # find id from families.name
insert into members (family_id, name) values(@id, 'Betty');     # do the inserts
insert into members (family_id, name) values(@id, 'Ben');
insert into members (family_id, name) values(@id, 'Bob');

select @id:=null;                                   # prevent ghosts of selects past
select @id:=id from families where name='Cintez';   # find id from families.name
insert into members (family_id, name) values(@id, 'Charles');   # do the inserts
insert into members (family_id, name) values(@id, 'Christina');
insert into members (family_id, name) values(@id, 'Cindy');

####################################################
# SHOW EACH FAMILY AND ITS MEMBERS
####################################################
# Join the tables in the where clause, and find all
# family members.
select families.id, families.name, members.name
from families, members where
(members.family_id = families.id);
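
To try the script yourself, you might save it to a file (the name families.sql below is only for illustration) and feed it to the mysql client as a user with create and drop privileges on the target database. Note that the foreign key is enforced only by storage engines that support it, such as InnoDB; older defaults such as MyISAM parse the clause but silently ignore it.

[nilesh@nilesh] mysql -u root -p test2 < families.sql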

The preceding script exercised the creation of databases and tables, the insertion of rows, and the viewing of a one-to-many relationship through a join expressed in a where clause. The script is pretty much DBMS independent, and runs on both MySQL and Postgres. However, certain legitimate SQL queries don’t work on some versions of MySQL, and the day-to-day commands differ between the two systems, as the following comparison shows:

MySQL Commands vs. Postgres Commands

Task: Comments in SQL code
MySQL: # as the first printable character on the line
Postgres: anything between /* and */
Discussion: According to my experimentation, MySQL comments produce an error in Postgres, and vice versa, which is too bad.

Task: Connect to a database
MySQL: use dbname;
Postgres: \c dbname

Task: Find all databases
MySQL: show databases;
Postgres: \l
Discussion: The Postgres command is a lowercase L, not the numeral 1. It errors out if you end it with a semicolon. It can be used in a script file.

Task: Find all tables in the current database
MySQL: show tables;
Postgres: \dt
Discussion: \dt shows only user-level tables. Use \dS to see system-level tables.

Task: Show the structure of one table
MySQL: describe tblname;
Postgres: \d tblname
Discussion: The Postgres version doesn’t show the length of varchar columns.

Task: Change database
MySQL: use dbname;
Postgres: \c dbname

Task: Add a user
MySQL: grant select on dbname.* to username@userhost identified by 'userpassword';
Postgres: create user username with encrypted password 'userpassword';
Discussion: WARNING: I currently am having problems getting created Postgres users to be able to authenticate.

Task: Grant privileges
MySQL: grant privlist on dbname.* to username@userhost identified by 'userpassword';
Postgres: grant privlist on tbllist to slitt;
Discussion: In Postgres, tbllist must be an explicit, comma-delimited list of tables. No wildcards.

Task: Change a user’s password
MySQL: grant select on dbname.* to username@userhost identified by 'userpassword';
Postgres: alter user username encrypted password 'userpassword';
Discussion: WARNING: I currently am having problems getting created Postgres users to be able to authenticate.
