Installing Suse Linux Enterprise Server

Introduction

The installation procedure for Suse Linux Enterprise Server, like that of most GNU/Linux distributions, allows you to set up the software for multiple purposes. Unlike other operating systems which simply "dump" a basic system during install, SLES lets you fine-tune the system during install. This allows for a more streamlined and custom installation right from the start. To help you plan for your SLES installation, here is a rundown of what to expect during install.

Installation Overview

SLES Installation Overview Screen

Disk Configuration - The first item to seriously look into when installing SLES is the disk subsystem. Here you are presented with many, many choices. The important thing here is to remember that this decision can be difficult to change once the installation is completed. For instance, it can be extremely hard to re-configure where your data is stored.

Another important issue with the disk subsystem is to ensure that you maintain data integrity in case of failure. This means that you should look into either investing in a hardware RAID solution or utilizing the excellent Linux Software RAID solution included with SLES.

Software Selection - The software selection procedure is not as critical as the disk subsystem, because it can usually be changed after installation is completed. You will, however, want to investigate whether or not you want to implement an LDAP solution on your server (covered later in this chapter).

When you select the software to be installed, you should consider who is going to maintain the server. If it is a GNU/Linux expert, you may or may not want to install the X.org Windowing System. If it is someone with moderate to little GNU/Linux experience, you should definitely install a complete Desktop Environment to help them along. One thing to note is that you should probably not have the X.org Windowing System start automatically on bootup. You do this by setting the "Default Runlevel" to 3 instead of 5.
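
If you need to change this after the installation, the default runlevel on a SysV-init based system such as SLES is stored in /etc/inittab; a minimal sketch of the relevant line (back up the file before editing it):

	# /etc/inittab - boot to runlevel 3 (multi-user, no X) instead of 5
	id:3:initdefault: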

User Authentication Sources - Once the installation procedure installs all the software you select, it will then restart the computer and re-launch the setup procedure to allow you to finish configuring the software for your environment. Most of these settings simply configure the server hardware, but the one you should really look at is the "User Authentication Sources".

The User Authentication Sources page allows you to specify how the server will authenticate the users who access its resources. This could be actually logging into the server or just accessing files that are stored on the server. The sources can be a local file on the server, an LDAP Server (local or remote), a Windows Server, or even a NIS Server or an eDirectory Server. If this server is simply a standalone server that will only perform a specific task on the network (such as DNS, DHCP or Web Server), a local source is enough. However, if you are going to be providing files over the network through NFS or Samba, it is wise to utilize another server, or set up a local LDAP Server, to ensure that the user and group databases are consistent across the network.

Configuring the Disk Subsystem (Partitioning)

The most important step during the installation procedure is to ensure that you configure the disk subsystem correctly. It can be very hard, if not nearly impossible (without a total re-install), to reconfigure some aspects of the disk subsystem. So, a total understanding of what can be (or needs to be) configured for your server is imperative.

For instance, a few questions that you must ask yourself:

  • Do I need to provide data redundancy across multiple drives to prevent losing data when a hard drive crashes?
  • Do I need to provide more storage than what a single disk drive can hold?
  • Will I need an easy way to add more storage to the server for future needs?
  • Do I need to provide a way to take a snapshot of the system at any given time?
  • Is our configuration so unique, and does it change so often, that I need an advanced tool to help maintain our disk subsystem?

Expert Partitioner within the SLES install

Normally on most distributions, you must manually configure all of the tools needed to properly set up the server's disk subsystem, which can be a huge overhead. Thankfully, the Yast installation procedure (and the post-install Yast modules) provides an easy way to configure and maintain all aspects of your disk subsystem, including Linux RAID, Logical Volume Management and the Enterprise Volume Management System (as well as other advanced systems, including iSCSI).

RAID Arrays

The first part of the disk subsystem that we will look at is implementing some sort of RAID (Redundant Array of Inexpensive Disks) Array, instead of using single disk drives. In a nutshell, a RAID array spreads data across two or more physical disk drives. This gives many benefits such as:

  • Increased storage space by having a storage system that can be much larger than what a single hard disk drive can provide
  • Increased speed by reading data from multiple disks
  • Data redundancy by configuring the RAID array to write the same data to multiple disks to reduce the risk of data loss if one drive stops working

RAID Arrays can be configured in various ways; these configurations are referred to as RAID Levels. The most commonly used RAID levels are 0, 1 and 5.

RAID 0 - With this RAID level, the member drives' capacities are all added together to create a large storage space. The read and write speeds are increased since the data is read from and written across multiple drives, but there is no redundancy to this RAID level. If one of the drives fails, the data is lost.

RAID 1 - This RAID level is also known as "disk mirroring". The data is "mirrored" on two or more drives in this array. This RAID level provides fault tolerance, but only provides the storage space of the smallest drive in the array. Read speed can be greatly increased with this RAID level, but write speed (at best) is only as good as the write speed of a single drive.

With RAID 1, you can add a spare drive that will act as what is called a "Hot Spare", which will be enabled only when another drive in the array fails.

RAID 5 - This RAID level is probably the most used. Here you have three or more drives that act similarly to RAID level 0, except that additional "parity information" is written across the drives, so if one drive fails the data is not lost. In addition to providing fault tolerance, both read and write performance usually increase. The storage size of a RAID 5 array increases with the number of drives added ((number of drives - 1) × storage size of the smallest drive).

With RAID 5, you can also add a spare drive that will act as what is called a "Hot Spare", which will be enabled only when another drive in the array fails.

Along with these RAID levels, you can actually "combine" different RAID levels to suit your needs. For instance, RAID 1+0 (sometimes referred to as RAID 10) stripes a RAID 0 array across two or more RAID 1 arrays. This arrangement utilizes the fault tolerance of RAID 1 while still gaining the benefits of RAID 0 (increased performance and storage). Similarly, RAID 5+0 (or RAID 50) combines the features of RAID 5 with RAID 0.
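
As a rough illustration of how such a nested 1+0 array could be assembled by hand with mdadm (the device names below are only placeholders; the Yast partitioner accomplishes the same thing graphically):

	# Create two RAID 1 mirrors from four "Linux RAID" partitions
	mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
	mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
	# Stripe (RAID 0) across the two mirrors to form the 1+0 array
	mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1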

Hardware vs Software RAID

Along with deciding which RAID level to deploy on your servers, you must also decide whether to use a hardware-based RAID solution or the software RAID solution that is available in Suse Linux Enterprise Server. This decision is hardly ever cut and dried; both solutions have their benefits and shortcomings, and neither solution is the best in every situation.

With that said, there is usually one situation in which a hardware RAID solution will almost always be better than a software RAID solution, and that is if you are going to implement a RAID level 1 (mirroring) solution. This is due to the fact that a software RAID solution has to write the data to multiple drives, which could saturate the bus. With a hardware solution, your data only has to go across the bus once and the controller takes care of writing the redundant data to every drive in the array.

However, there is one thing I want to stress here: Not all "RAID Adapters" are true "Hardware RAID Solutions". A "Hardware RAID controller" is an adapter that has the "electronics" on-board that takes care of handling the processing of the RAID Array. Usually these controllers also have separate cache memory and some even have a power backup device (such as a battery) to ensure data that has not been written to the disks survives a power failure.

Unfortunately, most RAID controllers available today do not have the electronics to handle the processing of the RAID array. Instead they rely on software stored in the adapter's BIOS, or even within its "Device Drivers", for their RAID functionality. These controllers offer no real benefit over the software RAID available with Suse Linux Enterprise, and most will actually perform worse. Also, many of the companies that create these devices do not create GNU/Linux device drivers, or if they do, they are usually distributed as binaries that are not updated regularly - this can leave you unable to update the kernel or build a custom Linux kernel for your server.

Creating a Software RAID Array

The biggest difference between setting up a Software RAID solution as opposed to a Hardware RAID solution is the fact that a Hardware RAID adapter will present the array as a single drive to the Operating System. A Software RAID solution must have all of its drives partitioned and configured within the Operating System. Thankfully, Suse Linux Enterprise Server provides excellent tools to create and maintain a Software RAID solution.

The first step in creating a Software RAID Array is to create the partitions that will be part of the array. This is a little different from a Hardware RAID solution, which uses entire drives for its arrays. The process of creating these partitions is similar to creating any partition, except you do not format the partition; instead you "identify" the partition as being "0xFD Linux RAID".

In the example I am giving here (see image below), I will create a single partition on every drive that completely fills each drive. In a real system, it is common practice to separate out different RAID partitions on each drive. For instance, most systems that deploy a Software RAID solution will have a separately mounted /boot filesystem that could very well be on its own Software RAID Array. The swap partition could also be created on a similar Software RAID Array.

Creating Linux RAID Partitions

Selecting the RAID Type

Creating Linux RAID Partitions and selecting the Software RAID Type

Once you create all of the Linux RAID Partitions you need, the next step is to actually create the Software RAID Array. To do this, you click on the RAID drop-down button, then select "Create RAID". The installation procedure (or the Yast Partition Module if you are doing this after installation) will then start the RAID Wizard.

The first step in the RAID Wizard is to select the type of RAID level you want to implement. (See the previous section for information on RAID levels, but remember that RAID level 0 does not provide any data redundancy and if a drive fails in a RAID 0 Array the data in that array would be lost.) After you select the RAID level, the next step is where you select which "Linux RAID" partitions that you want to include within this RAID array.

Adding Linux RAID Partitions to the RAID Array

Initializing the RAID Array

Adding "Linux RAID" Partitions to the RAID Array and Initializing the Array

The next step is where you actually create the array. Here you have a few choices to make. The first is the format. If you are going to use this RAID array with LVM (Logical Volume Management - which is covered later), then you would select "Do not format" and remove the "Mount Point". If you are not going to use LVM, you would simply select a format type and set the mount point of the array.

The next decision you have is what "Chunk Size" to set for the array. The chunk size is actually the size of the smallest "chunk" of data that you will write to a particular disk. For instance if you have a file of 64kb and the chunk size of the array is 8kb, there will be eight 8kb "chunks" written across the array.

The value of the Chunk Size is very dependent on not only the RAID type, but also on the filesystem used on the RAID array. There is no single perfect chunk size for every hardware setup and situation, but there are good guidelines to start from. A good rule of thumb is to start with 32kb chunk sizes for RAID 0 arrays and 128kb for RAID 5 arrays. With RAID 1 arrays, the chunk size really only pertains to how data is read from the array, so a good start would probably be 4kb. For RAID arrays that you are going to use as swap, 64kb is the best choice. Remember these are only starting points; to fine-tune your setup you really should test different sizes to find what is optimal for your configuration.

Once everything is configured, click Finish to create the array.
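
For reference, the equivalent operation from the command line would look something like the sketch below (the device names are only placeholders, and the 128kb chunk size simply follows the RAID 5 rule of thumb above):

	# Assemble three "Linux RAID" partitions into a RAID 5 array with a 128kb chunk size
	mdadm --create /dev/md0 --level=5 --chunk=128 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
	# Format and mount it (skip this if the array will become an LVM physical volume)
	mkfs.ext3 /dev/md0
	mount /dev/md0 /srv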

Maintaining a Software RAID Array

Once you implement a software RAID array, it pretty much takes care of itself as long as the drives involved in the array stay functional. So the questions become: how do you know when a drive has failed, how do you replace a drive that no longer works, and how do you plan for a drive failure? Just remember that the following procedures will only work with arrays that have data redundancy (they will have no effect on RAID Level 0 arrays).

Monitoring RAID Arrays

First let us look at how to query the status of a Software RAID array. There are a few ways to do this. For instance, since GNU/Linux treats everything as a file, you can simply "cat /proc/mdstat" to see the status of the RAID arrays on the system. Also included with Suse Linux Enterprise Server (as well as most GNU/Linux distributions) is the "mdadm" utility. This utility lets you query and maintain your software RAID arrays. For instance in the following image, I ran a detailed query of a Software RAID array using:

	mdadm --detail /dev/md1

Now, of course you do not want to have to remember to look at all of your software RAID arrays periodically to ensure they are operating without a drive failure. The mdadm program can also be run as a daemon to send alerts (usually by email) to you if the status of a software RAID array changes in any way.
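
Under the hood, this daemon essentially runs mdadm in monitor mode; a rough equivalent manual invocation (the email address below is only a placeholder) would be:

	mdadm --monitor --scan --daemonise --mail=admin@example.com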

To enable this service you will want to edit the "/etc/sysconfig/mdadm" file to ensure that the email address is correct (you can utilize the Yast "/etc/sysconfig" editor module to accomplish this). You will also want to make sure that you configure the "Mail Transfer Agent" Yast Module, if your server is not a full-fledged email server, to ensure that emails will be sent correctly. Once you edit the appropriate file and ensure that the system will mail alerts correctly, start the service by running:

	/etc/init.d/mdadmd start

and to ensure it runs when the computer reboots, issue the command:

	chkconfig mdadmd on

Viewing the status of a software RAID Array

Replacing a Malfunctioning Drive

If you do have a hard drive failure, the array will be in a "degraded" state. This means that you no longer have data redundancy, and if another drive fails you will lose the data stored within the array. To prevent this from happening you must replace the drive that has failed and add the new drive to the array.

Before a new drive can be added to the array, it must be partitioned with "Linux RAID" partitions. This procedure is the same as the one used when the Software RAID array was first configured. These partitions must be at least the same size as those contained on the failed drive.

Once you have the new drive partitioned, you then simply need to add the "Linux RAID" partitions into the appropriate software RAID array using the mdadm utility. For instance:

	mdadm -a /dev/md2 /dev/sdb3

This command will add the "/dev/sdb3" partition into the "/dev/md2" RAID array. Once the partition is added to the array, the array will start its recovery procedure. This procedure may take a while depending upon the hardware involved with the array. Unfortunately, some individuals complain that this takes way too long (days in some accounts). If you are careful in choosing the right hardware for the software RAID array, this delay can be mitigated. For instance, during testing I was using 36GB Western Digital Raptors with a SATA interface, and the average recovery time from a failure was about 18 minutes (or 500MB per minute) - not too shabby.

If you want to test your hardware, or if you absolutely need to remove a drive that you believe may become faulty in the near future, you can force the RAID array to mark it as failed. To do this simply issue the following command:

	mdadm --manage --set-faulty /dev/md0 /dev/sdb1
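
Once a partition has been marked as faulty (or has actually failed), you can also remove it from the array before physically replacing the drive; for example (device names are only placeholders):

	mdadm --manage /dev/md0 --remove /dev/sdb1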

Adding a "Hot Spare" Drive

If the data residing on the RAID array is extremely important, or if you are installing a server that will be a large distance away, you can add what is called a "Hot Spare" to the array. What a hot spare does is nothing... until a failure occurs. If you have a hot spare within your array and a failure occurs, the RAID array will immediately start the recovery procedure using the hot spare. This can be a lifesaver if your server is 200 miles away or if a replacement drive is not readily available.

The procedure for adding a hot spare to the array is basically the same procedure used to replace the drive (without removing an existing drive of course). First you will want to partition the drive with "Linux RAID" partitions, then you will want to add the partitions to the appropriate array. If the array is not in a "degraded" state when you add the partition to it, the new partition will be added as a hot spare.
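
For instance, assuming /dev/sdc1 is a freshly created "Linux RAID" partition and /dev/md1 is a healthy (non-degraded) array, the following adds it as a hot spare (the device names are only placeholders):

	mdadm -a /dev/md1 /dev/sdc1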

Logical Volume Management

Normally when you create disk "volumes", you partition a hard drive, the drive's "partition table" gets written/updated, then you format the partitions so you can write data to the disk. Once the partitions are created you cannot "resize" or "move" the partitions without having special software utilities (and probably a few reboots).

What if you want to be able to "resize" or "move" a volume, or maybe even add a drive to a volume to increase the storage capacity? This is exactly why LVM exists. Basically, LVM moves the partitioning of volumes from the physical drive to the Operating System, which allows much more flexibility and freedom in configuring your storage solution.

With LVM you are able to "resize" partitions without having to restart the computer, you are able to add drives to an existing LVM volume, and you can even take a "snapshot" of an existing volume for backup purposes (or other reasons). However, it is also important to state that LVM does not implement any sort of data redundancy, but you can use RAID arrays with LVM to gain the benefits of both.

Creating LVM Logical Volumes

Preparing your system for LVM is similar to preparing your system for Software RAID: you must have a "physical base" to place your LVM Volume Group upon. This "physical base" can be a Software RAID array, or it can be a partition on a single drive (or a hardware RAID array).

In this example, I am going to create a "physical base" out of single hard drives. Normally you would not want to do this, but for teaching purposes it is probably the most effective. A more realistic example will be shown in the "Putting it all Together" section.

Creating LVM Volumes

Adding LVM Volumes to Volume Groups

Creating LVM Volumes and adding them to a LVM Volume Group

As with a Software RAID array, the first thing you need to do is to create "Linux LVM" partitions on your hard drive(s). These partitions will become the physical base for your LVM Volume Group. These partitions can be any size as they will simply be "added together" to form your storage solution.

Once you create your "Linux LVM" partitions, you will now need to add them to a "Volume Group". Normally you will only have one "Volume Group" on a server, but it is possible to create multiple Volume Groups (although most situations do not justify it). This "Volume Group" then grows in size when you add partitions to it.

The easiest way to explain a Volume Group is to think of it almost like a physical drive. Once you create the "Volume Group", you then have to "partition" areas of it in order to store data on it. You do this by adding "Logical Volumes" to the "Volume Group".

Adding Logical Volumes within a Volume Group

The process of creating "Logical Volumes" is similar to creating physical partitions. You must specify a filesystem and a "Mount Point", and with "Logical Volumes" you must also specify a "volume name" that will be used to mount the volume and perform other functions. A common procedure is to name the logical volume after where it will be mounted, for instance a logical volume that will be mounted to /home should simply be called "home".

There are a few things to keep in mind as you create these logical volumes. First you need to be aware of the fact that the ext2/ext3 filesystems will not let you resize the filesystem while it is currently mounted (this functionality should be added in the near future). For now, if you want to grow the filesystem while it is mounted, you must format the logical volume as reiserfs or xfs. Also note that xfs will not allow you to shrink a partition at all and reiserfs will only allow you to shrink a partition when it is not mounted.

Working with Logical Volume Management

LVM brings many features to the table; the two most popular and most used are the ability to resize volumes and the ability to create a snapshot of a volume.

Resizing an LVM Volume

There are two ways to resize an LVM volume with Suse Linux Enterprise Server. Probably the easiest way is to simply use the Yast LVM Module. When you launch the LVM module you are presented with a screen that shows all the logical volumes on your system. Simply click on one of them and select "Edit". This will bring up a screen that will allow you to adjust the size of the volume; simply grow the volume to whatever size you want and click OK, then click on Finish within the Yast LVM Module to apply the settings. The module will take care of growing the volume and adjusting the filesystem for you.

The Yast LVM Module

Editing a Logical Volume

The Yast LVM Module and editing a Logical Volume

The second way to resize a volume is to do it manually within a terminal. Fortunately this is not much harder than using the Yast LVM Module. In fact, you may prefer it. There are two steps in manually resizing a logical volume: first you resize the volume, then you must ensure the filesystem makes the appropriate changes.

To do this the manual way, the command you issue to actually change the size of the volume is the "lvextend" command, as in these examples, where the first command will increase the volume to 25GB and the second command simply adds 10GB to the volume.

	lvextend -L25G /dev/system/home
	lvextend -L+10G /dev/system/home

Once you resize the volume, the next thing you have to do is to resize the filesystem to accommodate the change in volume size. The command you run to do this varies depending upon the filesystem. For instance:

ext2/ext3 filesystems

	umount /home
	resize2fs /dev/system/home
	mount /dev/system/home /home

reiserfs filesystems

	resize_reiserfs -f /dev/system/home

xfs filesystems

	xfs_growfs /home

Creating Snapshots

Another feature that LVM allows is the ability to create "snapshots" of your volumes. A snapshot is basically a "clone" of another volume at the time you create the snapshot. The interesting thing with these snapshots is the fact that they can be very small in size, since they only hold the changes made to the original volume after you create the snapshot.

These snapshots are very useful when doing backups (this is their primary purpose). However, now that virtualization has started to become feasible to deploy, snapshots will probably play an important role in virtualization solutions.

To actually create a snapshot, you use the lvcreate command with the -s switch. For instance, to back up the /var volume you could do the following:

First create the snapshot

	lvcreate -L1G -s -n varbackup /dev/system/var

Now mount the snapshot

	mkdir /mnt/var_snapshot
	mount /dev/system/varbackup /mnt/var_snapshot

Run a backup command, for instance: (of course you would use a better command for the backup)

	tar czf varbackup.tgz /mnt/var_snapshot

Now unmount and remove the snapshot

	umount /mnt/var_snapshot
	lvremove /dev/system/varbackup

Enterprise Volume Management System (EVMS)

For some installations, you may require more control over the disk subsystem than what is provided by the previously mentioned tools. To accommodate these installations, a utility called EVMS was created. When working with EVMS, the first thing to realize is that the Enterprise Volume Management System (EVMS) is not a single storage technology, such as Software RAID or Logical Volume Management; it is instead a "framework" created to help manage all aspects of the system's storage.

This framework includes ways to configure Software RAID and LVM Volume Groups, as well as ways to do basic disk tasks.

Creating an EVMS Container

Adding Volumes to an EVMS Container

Creating an EVMS Container and adding Volumes to it

I would venture to say that most servers would probably gain little benefit from using EVMS, and sometimes it may be a hindrance since it does require a bit of knowledge to configure and maintain the storage subsystem using EVMS. However, if your organization or server uses technology such as Multipath or SANs, you may in fact benefit greatly from using EVMS, as this is what it was created for. Also, if you are looking to virtualize your servers/services, EVMS would be something you should definitely look into.

Using the EVMS Administration Utility

With that being said, the configuration of EVMS is very similar to LVM. You have to create LVM Partitions (or Software RAID arrays), then you would go through the Yast EVMS Wizard to create an EVMS container. Once the container is created you would add your LVM Partitions or your Software RAID arrays to the container. After you have a physical base (the EVMS container) established you can then add volumes to the container to hold data (very similar to LVM).

However, once you are finished with the installation, this is where the similarities to configuring a simple LVM solution end. Instead of using simple commands to grow volumes or create snapshots, you are given a full-fledged utility to accomplish all of the tasks related to maintaining your disk subsystem. Not only can you manage all aspects of the LVM portion of your disk subsystem, you can also fine-tune your Multipath segments, your Software RAID arrays, etc. from a single utility.

Fully configuring and maintaining your EVMS solution could fill a whole book, so if you are interested in micro-managing your disk subsystem you should definitely do further research on EVMS.

Putting it all together

I covered quite a bit of information on the options available when configuring the disk subsystem of your server. So, to finish this section off, I am quickly going to step through a "typical" setup for a basic file server. This setup will use software RAID arrays along with LVM to ensure the server maintains data integrity, as well as remains reconfigurable to handle future data storage needs. Remember this is only a recommended baseline; your setup may require something entirely different.

The first thing you must do is to partition all of the drives within your computer. Normally when I use a Software RAID array I always create a 300MB Linux RAID partition on all the drives, which will be added to a RAID Level 1 array (mirrored array). This 300MB array will become the "/boot" mount which holds the current kernel and initial ram disk for the system. You can probably reduce this to 100MB, but I use 300MB on the off-chance that the system may be used as a Xen Virtual Machine base in the future.

Expert Partitioner within SLES Install

Example Disk Setup for Standard Servers

Next, you will want to create a way to handle swap space for the server. I usually handle this similarly to the "/boot" mount: I simply add a 2GB partition to every drive; these partitions are again added to a RAID level 1 array (mirrored array), which is then specified as swap space. When you create the RAID array, it is highly recommended to set the chunk size to 64KB since this size is optimal for swap space.
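
If you ever need to recreate this arrangement by hand after installation, the rough command-line equivalent (the device names are only placeholders) would be:

	# Mirror the two 2GB "Linux RAID" partitions and use the result as swap
	mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
	mkswap /dev/md1
	swapon /dev/md1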

Since we have the "/boot" mount and the swap taken care of we are going to focus on the "/" root mount. There are two schools of thought here: One is to create a separate RAID level 5 array for the "/" root mount, then create additional arrays (or an LVM setup) for additional mounts. The other idea is to create a single RAID level 5 array, which would then be added to an LVM setup where you would create a "/" root logical volume. This second school of thought gives you the most flexibility, however you must ensure that the LVM code is included into the initial ram disk for your system. SLES does include LVM into the initrd, but if you are going to build your own kernel it is highly recommended that you do not include the "/" root mount within an LVM Volume Group (unless you compile the RAID and LVM code into your custom kernel).

So, if you are going to build your own Linux kernel, or if you feel uneasy about using a logical volume as the "/" root mount, go ahead and create a separate RAID level 5 array to hold the "/" root mount. This mount will probably only need to be 10GB or so (as long as you create separate mounts for "/home", "/srv", "/var", etc.) depending upon the function(s) of your server.

The remaining drive space should then be partitioned into a RAID level 5 array, which is then going to be configured as an LVM volume group. To do this, go ahead and create the RAID array as you normally would; however, when it comes to the final step in creating the array, ensure that you select "do not format" and that there is no mount point selected. This ensures that the array will be available to be added to an LVM volume group.

Once you create the RAID level 5 array, go ahead and launch the LVM wizard and add this array to the "system" volume group. Once the volume group is created, you can now create logical volumes for the important mount points for your server. The mount points to consider are:

  • /home - which holds all of your users' data
  • /srv - which is usually used to store your network exports, your websites, ftp access, etc.
  • /var - which holds your IMAP email storage, Samba profiles, DNS chroot, log files, etc.

By putting these mount points within an LVM volume group, your system is now flexible enough to grow with your server's needs. Each mount point can then be resized as needed, and if you run out of storage space, you can easily add another RAID level 5 array to the volume group for more storage without having to repartition or move your data around.
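
For instance, if you later add another RAID level 5 array (say /dev/md4, a placeholder name) for more space, extending the "system" volume group and growing a volume comes down to a couple of commands plus the filesystem resize covered earlier:

	# Prepare the new array and add it to the existing "system" volume group
	pvcreate /dev/md4
	vgextend system /dev/md4
	# Grow the /srv volume by 100GB, then grow its filesystem to match
	# (this assumes /srv is reiserfs; use resize2fs or xfs_growfs as appropriate)
	lvextend -L+100G /dev/system/srv
	resize_reiserfs -f /dev/system/srv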

Software Selection

Unlike the disk subsystem configuration, Software selection during install is not as important since it can be easily changed after the installation is completed. However, there are a few things to point out and a few suggestions to make.

To use LDAP or not - Probably the most important choice to make when selecting the software to install is to decide whether or not to install and use LDAP on your server. This is important during install because if you do check it, the installation procedure will set up the LDAP server for you. An LDAP server can be very difficult to set up manually, and Suse Linux Enterprise Server has quite a few services that can interact with the LDAP server. I highly recommend that if you wish to utilize an LDAP server, you select it during install and have the installation procedure configure it for you.

To determine whether or not you want to use LDAP on your server, here are a few situations that call for an LDAP server:

  • If you want to use SLES's integrated Mail Server you must use LDAP
  • If you want a "Secondary Domain Controller" for your Windows clients you must use LDAP
  • If you want an easy way to have a centralized login server for GNU/Linux workstations and servers you will probably want to use LDAP
  • If you don't want to manually create the "mapping" between Windows usernames and Unix usernames you will probably want to use LDAP (see the Samba Chapter for more information)

Selecting Software to be Installed

Detailed View of the Software Available

Selecting Software to be Installed and the Detailed View of the Software Available

To install X Window System and a Graphical Environment - The question of installing the X Window System (along with a Graphical Environment) on a server has been debated for quite a while. On one hand, installing the X Window System does open a few security windows (pun intended) that you must ensure aren't breached, but on the other hand it does provide an extremely nice interface to help maintain the server.

This question basically comes down to this - are the administrators of this server knowledgeable enough to maintain the server without any graphical interface? Most of the time, since you are using Suse Linux Enterprise Server, you will definitely want to install the X Window System (along with a Graphical Environment) simply to have access to the graphical Yast front end.

What about Xen Virtualization - Suse Linux Enterprise Server is probably one of the best bases for Xen virtualization deployments. However, when you decide to go the virtualization route you must keep in mind that you want the base Operating System to be as lean as possible. This means that you do not want *any* services at all on the "physical" server except Xen. If the base Operating System goes down, all of the virtualized systems go down with it. The whole concept of deploying virtualized servers is very different from deploying a standard server, which is what this document covers.

Network Configuration

When configuring networking with Suse Linux Enterprise Server, most of the options are pretty straightforward, and for most configurations you should be able to easily configure your network adapters. However, there are two things I do want to cover: the physical network hardware to use in a network, and the process of Network Card Bonding (using two or more NICs as one).

Network Hardware

This problem really has nothing to do with deploying Suse Linux Enterprise Server, but I have run into it repeatedly when working on networks that I have never seen before. The worst part is that nearly every network I have seen with this problem already has the hardware readily available to optimize the network - the fix simply takes a few seconds.

The problem I am talking about is the incorrect usage of Gigabit ports on network switches. Currently, most people apparently reserve these ports for their own machines, or for someone "higher up in the food chain" of the company. The problem with this is that the switches themselves become the bottlenecks of network throughput, as shown in the following charts. The difference between these two performance results is simply that the client switches were connected to the server switch (already a gigabit switch) through their gigabit ports.

Net Throughput Difference between Switches

Net Response Time between Switches

Network Throughput and Response Time between Different Speed Switches

So if you are concerned with the performance of your network, the easiest, most cost-effective solution is to simply purchase a gigabit switch to connect all of your servers to, then connect all of the other switches to the gigabit switch through their gigabit port (if there is one available). This simple solution can easily speed up most networks regardless of their size.

Network Card Bonding

Network Card Bonding (or port trunking) is the process of combining multiple network ports or adapters into a single "interface" to your network. This process has multiple benefits, ranging from providing a backup network path in case one adapter goes down, to increasing the server's total throughput by combining the speeds of all the adapters within the "bond".

To configure network bonds within Suse Linux Enterprise Server you utilize the same Yast Module that handles basic network configuration. Although it is not readily apparent within the Yast Module, network bonding is actually quite easy to configure and can be done either during the Installation process or later on using Yast.

The first step to take when bonding network adapters is to set the network adapter's "Device Activation" to "Never" so the adapter won't be enabled during boot until the bond "adapter" is enabled. This is found under the "General" tab of the Network Address Setup page (which is launched when you "Edit" the network adapter).

Yast Network Module

Disabling the Network Adapter's Device Activation

The Yast Network Module and Disabling the Network Adapter's Device Activation

Along with disabling the device activation of the network card, you must also ensure that it is set to not have an IP address. You do this by setting the card to "None Address Setup", which can be found under the "Address" tab of the Network Address Setup page.

After each adapter that you want to add to the bond is prepped with the above steps, you can now create the bond. To do this go back to the "Network Card Configuration Overview" page (where all of the adapters are listed) and click on "Add". This opens the "Manual Network Card Configuration" wizard, where you will set the type of card to "bond", then click next.

You should now be at the "Network Address Setup" page for the bond interface you just created. This page is basically the same as a normal network interface with the addition of two options. The first additional option is the ability to select which interfaces you want to include in the bond interface. If you do not have a list of adapters, ensure that you "prepped" them correctly following the steps listed above.

Setting the Adapter to None Address Setup

Creating the Network Bond Configuration

Setting the Adapter to "None Address Setup" and Creating the Network Bond Configuration

The second additional option presented is the ability to select the bonding driver options. This is where you specify what you want the bonding device to accomplish when using multiple adapters.

  • mode=balance-rr - Round-robin policy - Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
  • mode=active-backup - Active-backup policy - Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.
  • mode=broadcast - Broadcast policy - transmits everything on all slave interfaces. This mode provides fault tolerance.
  • mode=802.3ad - IEEE 802.3ad - Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.
  • mode=balance-tlb - Adaptive transmit load balancing - channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
  • mode=balance-alb - Adaptive load balancing - includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.

Configuring the Bond Network Adapter

Final Look at the Yast Network Module

Configuring the Bond Network Adapter and a Final Look at the Yast Network Module

The first four modes are probably the most used; note that the last three require ethtool support within the network card driver in order to work correctly.
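
After the wizard finishes, the bond configuration ends up in /etc/sysconfig/network/ifcfg-bond0. A rough sketch of what that file may look like (the addresses below are placeholders, and exact variable names can vary slightly between releases):

	# /etc/sysconfig/network/ifcfg-bond0 (example values only)
	BOOTPROTO='static'
	IPADDR='192.168.1.10'
	NETMASK='255.255.255.0'
	STARTMODE='auto'
	BONDING_MASTER='yes'
	BONDING_MODULE_OPTS='mode=active-backup miimon=100'
	BONDING_SLAVE0='eth0'
	BONDING_SLAVE1='eth1'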

LDAP Server Configuration

One of the most difficult, yet useful services to configure on GNU/Linux and Unix machines is the LDAP Server. Using an LDAP Server you can maintain a network wide database that contains all sorts of information regarding your network. For instance, it is widely used for User Authentication, Phone & Address storage, as well as storage for other services such as Samba and the Named (DNS) Server.

Fortunately, the Suse Linux Enterprise Server installation routine will configure the LDAP Server for you. All you have to do is ensure that you select the "Directory Server (LDAP)" pattern during the software selection portion of the routine. Then during the "Service" screen, it will automatically create the certificates required for LDAP to function properly, as well as ensure it starts upon boot-up.

Creating the Certificates and Starting the Server

Using the LDAP Server for Authentication

Creating the Certificate & Starting the Server and Using LDAP for Authentication

The installation procedure will also allow you to configure the LDAP Client information to allow the server to authenticate users using the LDAP Server instead of utilizing simple text files.

Maintaining LDAP Databases

After your server is completely configured, many services may depend upon the data located within your LDAP Server, so it is of utmost importance that this data is backed up and archived. To do this, many network admins configure "Slave LDAP" servers on their network specifically for this purpose. I believe this may be overkill in many situations, and for these installations a simple backup is preferred. However, if you have a larger network installation, or have a situation where the LDAP server must be running 24/7 (although a backup only shuts down the LDAP server for a few seconds), then you should deploy a Slave LDAP server for redundancy and for backups.

To back up the LDAP database you simply run the following command while the service is NOT running.

	slapcat > backup.ldif

This, of course, is hardly the whole answer, since it relies on you running it manually, so it is probably better to deploy a script that will back up your LDAP database automatically. The script that I usually use is as follows (create it in /etc/cron.daily so it will run on a daily basis).


#!/bin/bash
BACKUPDIR=/srv/ldap_backup
KEEPDAYS=180
FILENAME=$BACKUPDIR/ldap.backup.$(date +%Y%m%d)
# Create the directory
mkdir -p $BACKUPDIR
chmod 0700 $BACKUPDIR
# Stop the LDAP Server
rcldap stop
sleep 15
# Create a new backup
/usr/sbin/slapcat | gzip --best >${FILENAME}.new.gz
mv -f ${FILENAME}.new.gz ${FILENAME}.gz
# Start the LDAP Server
rcldap start
sleep 15
# Delete old copies
OLD=$(find $BACKUPDIR/ -ctime +$KEEPDAYS -and -name 'ldap.backup.*')
[ -n "$OLD" ] && rm -f $OLD

Now that you have backups of your LDAP data, it is fitting that I show you how to restore this data if you ever should need to. You must first stop the LDAP Server and move the current database to a different location (in case you need it again). Then you unzip one of the LDAP backups that you wish to use and "slapadd" it into a new database. Finally, ensure that the correct user "owns" the new database file and restart the LDAP server. So, an example would be:


rcldap stop
mkdir /root/ldap_old
mv /var/lib/ldap/* /root/ldap_old
gunzip /srv/ldap_backup/ldap.backup.20080121.gz
slapadd -l /srv/ldap_backup/ldap.backup.20080121
chown ldap:ldap /var/lib/ldap/*
rcldap start

Just remember that these LDAP backups are simple text files and you do not need to restore the entire database to simply restore specific "entries". For instance, if you happen to "accidentally" delete a Samba machine account within the database, you do not need to restore the entire database. Simply extract a backup, open it with a text editor and find the data that was deleted. Copy that data into a new text file, save it, then "slapadd" it into your current database (something similar to the slapadd command above).

This process can be very time consuming if you simply wish to change some entries in the LDAP server. Fortunately there are tools available to adjust the data within the LDAP Server without having to create text files and "slapadd" that data into the Server. In fact, Suse Linux Enterprise Server 10 (with Service Pack 1) now includes a utility to do just this called the "LDAP Browser Yast Module".

Using the Yast LDAP Browser

Using the GQ LDAP Client

Using the Yast LDAP Browser and the GQ LDAP Client

Using the LDAP Browser Yast module is quite easy. It will ask you for your password when it starts up, then you simply highlight the Entry you need to change, go to the "Entry Data" tab and adjust the data you need to change (don't forget to hit save when finished). This module is excellent for changing personal information of your Users, or changing the Samba ID for certain groups, etc. The only thing it currently will not do is allow you to delete entire entries. To do this, you must use another LDAP Editor. The one I use (mainly to delete stale Samba computer accounts in the LDAP tree) is called GQ. The home page for this client is at http://gq-project.org/.
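
If you prefer the command line, the standard OpenLDAP client tools can also delete an entry directly; a rough sketch (the bind DN and entry DN below are placeholders for your own tree):

	# Delete a stale Samba machine account from the directory
	ldapdelete -x -D 'cn=Administrator,dc=example,dc=com' -W 'uid=oldpc$,ou=Computers,dc=example,dc=com'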
