Kerberos-OpenAFS slave

Introduction

Even a single Kerberos-OpenAFS server has many advantages over alternatives such as NFS, but care should also be taken to offset the risk of a single point of failure. For this reason, AFS cells usually include more than one server, and an MIT Kerberos V master Key Distribution Center (KDC) can support one or more slave KDCs, which are read-only copies of the master. They offer the same functionality as their kin, except that a Kerberos slave KDC cannot perform administrative tasks.

This example builds on a previous one in which a Kerberos-OpenAFS master server was installed on a host called kas1.example.com, running Debian 5.0 (lenny). If followed properly, this step-by-step process should produce a new slave KDC and an additional AFS cell server. The system relies heavily on timestamps, so reasonably accurate time synchronization among all participating hosts is essential.

Before the actual installation process for Kerberos and OpenAFS can begin, it will first be necessary to install Debian lenny on a new host called kas2.example.com. The new host must have one extra free disk partition (/dev/hdb1 will be used here), and a DNS server must be available on the network with a zone file to which forward and reverse mappings for this host can be added. After the initial installation of the operating system, make sure these packages are installed as well:

~# apt-get install ssh ntp ntpdate xinetd nmap

After installing them, edit /etc/ntp.conf so that the new host synchronizes to a common NTP server (preferably a local one) and edit /etc/default/ntpdate to do the same. As a minimal sketch, assuming a local time server called ntp.example.com (substitute whichever NTP server is appropriate), the relevant lines might look like this:
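
# /etc/ntp.conf on kas2 -- point ntpd at the local time server
server ntp.example.com iburst

# /etc/default/ntpdate on kas2 -- have ntpdate use the same server
NTPSERVERS="ntp.example.com"

After saving both files, restart the daemon with /etc/init.d/ntp restart. Now the installation process for the MIT Kerberos V slave KDC, followed by the additional OpenAFS server, can begin: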


1. Kerberos client install

First, run the following command to test if the MIT Kerberos V server installed previously is available on the network:

~# nmap kas1.example.com

This should be among the results:

PORT    STATE SERVICE
749/tcp open  kerberos-adm
754/tcp open  krb_prop

If there is a problem, fix it first. If not, continue by installing this package:

~# apt-get install krb5-user

A total of three packages are installed as a result, including two dependencies:

krb5-config  1.22                      Configuration files for Kerberos Version 5
krb5-user    1.6.dfsg.4~beta1-5lenny1  Basic programs to authenticate using MIT Kerberos
libkadm55    1.6.dfsg.4~beta1-5lenny1  MIT Kerberos administration runtime libraries

During the installation, a few questions are asked regarding the krb5-config package that should be answered as follows:

Kerberos servers for your realm: kas1.example.com
Administrative server for your Kerberos realm: kas1.example.com

2. Host princ & keytab

Still on kas2, use kadmin to log in to the administration server (with the password xanthina) and create a new principal for this host, as well as a local keytab file:

~# kadmin -p admin
Authenticating as principal admin with password.
Password for admin@EXAMPLE.COM: xanthina
kadmin:  addprinc -randkey host/kas2.example.com
WARNING: no policy specified for host/kas2.example.com@EXAMPLE.COM; 
defaulting to no policy
Principal "host/kas2.example.com@EXAMPLE.COM" created.
kadmin:  ktadd host/kas2.example.com
Entry for principal host/kas2.example.com with kvno 3, encryption type 
AES-256 CTS mode with 96-bit SHA-1 HMAC added to keytab 
WRFILE:/etc/krb5.keytab.
Entry for principal host/kas2.example.com with kvno 3, encryption type 
ArcFour with HMAC/md5 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/kas2.example.com with kvno 3, encryption type 
Triple DES cbc mode with HMAC/sha1 added to keytab 
WRFILE:/etc/krb5.keytab.
Entry for principal host/kas2.example.com with kvno 3, encryption type 
DES cbc mode with CRC-32 added to keytab WRFILE:/etc/krb5.keytab.
kadmin:  q
~# _

The -randkey switch is used because a machine cannot enter a password. By default, Kerberos saves its keys in /etc/krb5.keytab, so when kadmin is run from the target host to create those keys, the keytab file will automatically be saved in the right place. Use the klist -ke command to list the keys in the local keytab file.
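
On kas2 the output should resemble the following; the exact set of encryption types depends on the Kerberos version in use:

~# klist -ke
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------
   3 host/kas2.example.com@EXAMPLE.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   3 host/kas2.example.com@EXAMPLE.COM (ArcFour with HMAC/md5)
   3 host/kas2.example.com@EXAMPLE.COM (Triple DES cbc mode with HMAC/sha1)
   3 host/kas2.example.com@EXAMPLE.COM (DES cbc mode with CRC-32)
~# _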


3. Kerberos server install

On kas2, install KDC services for the slave system:

~# apt-get install krb5-kdc

In this case, it is also the only package that is installed:

krb5-kdc     1.6.dfsg.4~beta1-5lenny1  MIT Kerberos key server (KDC)

During the automated configuration sequence for this package, two familiar problems appear. First, a message informs xinetd users that a commented-out kpropd entry has been added to /etc/inetd.conf and that it must be converted to xinetd format manually. This entry is for the Kerberos V database propagation daemon, which is required on slave KDCs. Fix it now by simply creating a file called /etc/xinetd.d/krb_prop with the following contents:

service krb_prop
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        user            = root
        wait            = no
        server          = /usr/sbin/kpropd
}

After saving this file, restart xinetd:

~# /etc/init.d/xinetd restart

Issue the following command to display the system's open TCP and UDP ports:

~# nmap kas2.example.com

What is important now is that kpropd is available, so the results should include:

PORT    STATE         SERVICE
754/tcp open          krb_prop

The second problem that occurs after installing the KDC is that it will not start, but this is because the database file for it (/var/lib/krb5kdc/principal) does not yet exist. This issue will be addressed later on.


4. Propagation ACL

On both kas1 and kas2, create a file called /etc/krb5kdc/kpropd.acl and add to it the following lines:

host/kas1.example.com@EXAMPLE.COM
host/kas2.example.com@EXAMPLE.COM

This file must contain a list of all host principals of the KDCs that are included in the realm. Therefore, kas1 is included. In this way, the master and all of the slaves will each have a complete list of all the KDCs in the realm.


5. Database propagation

On the new host, kas2, there is as yet no KDC database. A new one must be created, but without creating a new realm at the same time, which is what the krb5_newrealm command used earlier on kas1 would do. Instead, create an empty database with this command:

~# kdb5_util create
Loading random data
Initializing database '/var/lib/krb5kdc/principal' for realm 
'EXAMPLE.COM', master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: ammodytes
Re-enter KDC database master key to verify: ammodytes
~# _

Indeed, the same KDC database master key password that was used for kas1 − ammodytes − is used here as well. This is to avoid a bug: it should be possible to perform the next two commands without it, but that currently results in an error. Perhaps with Debian squeeze this will no longer be necessary.

Edit /etc/krb5kdc/kdc.conf and modify two options in the [realms] section regarding ticket lifetime to match the same values used on the master KDC:

max_life = 1d 0h 0m 0s
max_renewable_life = 90d 0h 0m 0s

Now on kas1, create a dump of the KDC database:

root@kas1:~# kdb5_util dump /var/lib/krb5kdc/slave_datatrans

Then, still on kas1, use the kprop command, which will look for the above dump file, to propagate the database to kas2:

root@kas1:~# kprop kas2.example.com
Database propagation to kas2.example.com: SUCCEEDED
root@kas1:~# _

Back on kas2, create a stash file for the KDC database master key:

~# kdb5_util stash
kdb5_util: Cannot find/read stored master key while reading master key
kdb5_util: Warning: proceeding without master key
Enter KDC database master key: ammodytes
~# _

The password − ammodytes − is saved in the /etc/krb5kdc/stash file.

It is now possible to start the KDC slave server on kas2.example.com:

~# /etc/init.d/krb5-kdc start

Issue the following command to display a list of all open TCP and UDP ports on the system:

~# nmap -sT -sU kas2.example.com.

Note the trailing dot! Among the results should be:

PORT     STATE         SERVICE
88/udp   open|filtered kerberos-sec

This means the kerberos-sec (an alias for kerberos) service is now available.


6. Realm config file

Edit the Kerberos realm configuration file, /etc/krb5.conf. This file is initially created by the Debian installer and contains information about the realms of a number of famous institutions, but none of that is necessary in this case. Instead, replace its contents with this:

[libdefaults]
        default_realm = EXAMPLE.COM
        forwardable = true
        proxiable = true

[realms]
        EXAMPLE.COM = {
                kdc = kas2.example.com
                admin_server = kas1.example.com
        }

[domain_realm]
        .example.com = EXAMPLE.COM
        example.com = EXAMPLE.COM

[logging]
        kdc = FILE:/var/log/krb5/kdc.log
        default = FILE:/var/log/krb5/kdc.log

See this section for a more detailed explanation of this file. The line "kdc = kas2.example.com" should then also be added to this configuration file on all the other hosts that are part of the same realm, or else they will not be able to make use of this server.
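
On such a host, the [realms] section of /etc/krb5.conf would then list both KDCs, roughly like this:

[realms]
        EXAMPLE.COM = {
                kdc = kas1.example.com
                kdc = kas2.example.com
                admin_server = kas1.example.com
        }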

After /etc/krb5.conf has been saved, create the Kerberos log directory:

~# mkdir /var/log/krb5

To prevent the log file from growing too large, create a logrotate configuration file. Edit /etc/logrotate.d/krb5-kdc and give it the following contents:

/var/log/krb5/kdc.log {
	daily
	missingok
	rotate 7
	compress
	delaycompress
	notifempty
	postrotate
		/etc/init.d/krb5-kdc restart > /dev/null
	endscript
}

7. Propagation script

The master KDC, kas1.example.com, must regularly push its database out to the slaves to maintain synchronization. One way to do this is to create a script on kas1, called /etc/cron.hourly/krb5-prop, to perform the previously described database dump and propagation tasks on a regular basis. An example of such a script can be found in Garman (2003) on page 65. Here is a modified version of it that uses the example.com domain name and the Debian directory structure:

#!/bin/sh

# Distribute KDC database to slave servers
# Created by Jason Garman for use with MIT Kerberos 5
# Modified by Jaap Winius, RJ Systems

slavekdcs=kas2.example.com

/usr/sbin/kdb5_util dump /var/lib/krb5kdc/slave_datatrans
error=$?

if [ $error -ne 0 ]; then

	echo "Kerberos database dump failed"
	echo "with exit code $error. Exciting."
	exit 1
fi

for kdc in $slavekdcs; do

	/usr/sbin/kprop $kdc > /dev/null
	error=$?

	if [ $error -ne 0 ]; then

		echo "Propagation of database to host $kdc"
		echo "failed with exit code $error."
	fi
done

exit 0

The main difference between this version and the original, however, is that this script does not produce any output on success, which is more in keeping with the general Unix philosophy.

Additional slave hostnames, separated by spaces and enclosed between double quotes, can be added to this script by modifying the value of the slavekdcs variable, for instance like this:

slavekdcs="kas2.example.com kas3.example.com"

Do not forget to make the script executable:

~# chmod 755 /etc/cron.hourly/krb5-prop
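
Before relying on cron, it does no harm to run the script once by hand; in keeping with the above, a successful run produces no output at all:

root@kas1:~# /etc/cron.hourly/krb5-prop
root@kas1:~# _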

8. Kinit test

Using some standard tools, there are a number of ways to test that the new KDC is working. Start by requesting a valid ticket for the admin principal:

~# kinit admin
Password for admin@EXAMPLE.COM: xanthina
~# _

Once given the right password (xanthina), a list of the tickets obtained can be viewed:

~# klist -5
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin@EXAMPLE.COM

Valid starting     Expires            Service principal
11/04/09 00:43:20  11/05/09 00:43:20  krbtgt/EXAMPLE.COM@EXAMPLE.COM
~# _
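
Since /etc/krb5.conf on kas2 lists only kas2.example.com as a KDC, these tickets must have been issued by the new slave. As a further test, request a service ticket for the host principal that was created earlier; the key version number reported should match the one shown by klist -ke:

~# kvno host/kas2.example.com
host/kas2.example.com@EXAMPLE.COM: kvno = 3
~# _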

9. Debconf reconfig

From this point on, a more detailed level of questioning will be required from debconf. To achieve this, run the following command:

~# dpkg-reconfigure debconf

Answer the questions as follows:

Interface to use: Dialog
Ignore questions with a priority less than: low

10. AFS kernel module

The objective here is to build and install the OpenAFS kernel module from source. However, since this host is physically identical to the previously installed machine, kas1, save time by copying the already compiled package from that system to this one:

~# scp kas1:/usr/src/openafs-modules*.deb /usr/src/

Following that, install the package:

~# dpkg -i /usr/src/openafs-modules*.deb

After it has been installed, test the OpenAFS kernel module by loading it:

~# modprobe openafs

Again, this is what it looks like when the module has been loaded:

~# lsmod |grep afs
openafs               473948  0 
~#

11. OpenAFS client install

Next, install the two OpenAFS client packages:

~# apt-get install openafs-{client,krb5}

Only these two packages are installed as a result:

openafs-client                1.4.7.dfsg1-6+lenny2                 AFS distributed filesystem client support
openafs-krb5                  1.4.7.dfsg1-6+lenny2                 AFS distributed filesystem Kerberos 5 integration

Following the installation process, debconf will ask a few questions regarding the openafs-client package. Answer them as follows:

AFS cell this workstation belongs to: example.com
Size of AFS cache in kB: 50000
Run Openafs client now and at boot? No
Look up AFS cells in DNS? Yes
Encrypt authenticated traffic with AFS fileserver? No
Dynamically generate the contents of /afs? Yes
Use fakestat to avoid hangs when listing /afs? Yes
DB server host names for your home cell: kas1 kas2

Regarding the AFS cache, the default size is about 50 MB and it is located in the /var/cache/openafs/ directory. Often the cache is increased to around 512 MB, but usually kept below 1 GB; larger cache sizes may lengthen the startup time, as all files within the cache must first be checked with the servers. It is vital that OpenAFS is never in danger of running out of cache space, since it is not designed to handle such situations gracefully. Another requirement is that an ext2 or ext3 file system is used for the cache directory; a file containing such a file system, or a dedicated partition, can be used for this purpose, but it is even possible to use a memory-based cache, which offers significant performance benefits.
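
The size and location of the cache chosen above are recorded by debconf in /etc/openafs/cacheinfo, which should now contain a single line listing the AFS mount point, the cache directory and the cache size in kB, separated by colons:

/afs:/var/cache/openafs:50000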


12. OpenAFS server install

To install the OpenAFS server, start by copying some vital information from kas1:

~# scp -r kas1:/etc/openafs/server /etc/openafs
root@kas1's password: 
CellServDB                                   100%   38     0.0KB/s   00:00
CellServDB.old                               100%   38     0.0KB/s   00:00
KeyFile                                      100%  100     0.1KB/s   00:00
UserList                                     100%    6     0.0KB/s   00:00
ThisCell                                     100%    8     0.0KB/s   00:00
~# _

These files include the AFS KeyFile, which must be exactly the same on all of the AFS servers that belong to the same cell.
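
A quick way to confirm that the copy succeeded is to compare checksums of the KeyFile; the following command must produce exactly the same output on kas1 and kas2:

~# md5sum /etc/openafs/server/KeyFile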

Now install the server packages:

~# apt-get install openafs-{fileserver,dbserver}

These are also the only two packages that are installed as a result:

openafs-dbserver              1.4.7.dfsg1-6+lenny2                 AFS distributed filesystem database server
openafs-fileserver            1.4.7.dfsg1-6+lenny2                 AFS distributed filesystem file server

One question must be answered for the openafs-fileserver package:

Cell this server serves files for: example.com

13. AFS partition

OpenAFS is usually set up to work with dedicated partitions, of which each server can maintain up to 256. These partitions are associated with mount points just below the root that follow a particular naming convention, /vicepXX/, where XX can be any letter or two-letter combination. In this exercise, a separate partition, /dev/hdb1, will be formatted with the ext3 file system and mounted at /vicepa/.

Actually, in cases where a separate partition is not available, it is also possible for OpenAFS to simply use a /vicepXX/ directory in the root partition. This is because OpenAFS does not require any particular low-level format for its partitions. AFS partitions can therefore be explored with ordinary UNIX tools, although the data stored therein is structured in a way that is only meaningful to OpenAFS.

Assuming a partition has already been created on the disk, format it with:

~# mkfs.ext3 /dev/hdb1
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 1048564 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
~# _

Then edit /etc/fstab and add this line to the end of the file:

/dev/hdb1       /vicepa         ext3    defaults        0       0

Now create the mount point and mount the new partition:

~# mkdir /vicepa ; mount /vicepa/
~# _
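
If all went well, the new partition is now mounted and still practically empty, which can be verified with:

~# df -h /vicepa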

14. Adding kas2 to the cell

To add the new server to the existing AFS cell, a series of actions will have to be taken on both the new server and the existing server(s) (in this case only kas1). Start by creating a file server instance on kas2:

kas2:~# bos create kas2.example.com fs fs \
	-cmd '/usr/lib/openafs/fileserver -p 23 -busyat 600 \
		-rxpck 400 -s 1200 -l 1200 -cb 65535 -b 240 \
		-vc 1200' \
	-cmd /usr/lib/openafs/volserver \
	-cmd /usr/lib/openafs/salvager \
	-localauth 
kas2:~# _

Note that the first -cmd argument, for the fileserver, is enclosed in single quotes; unlike what is shown above for readability, it may not actually contain any backslashes, so enter it as one long line.

Confirm that this worked:

kas2:~# bos status localhost -noauth
Instance fs, currently running normally.
    Auxiliary status is: file server running.
kas2:~# _

Since kas2 is also meant to be an AFS database server, the next task is to ensure that all of the servers know this. Start on kas1:

kas1:~# bos addhost -server kas1.example.com \
	-host kas2.example.com -localauth
kas1:~# _

After that, the idea is to do the same for kas2, but first edit /etc/hosts on that host and comment out the entry for kas2:

#127.0.1.1      kas2.example.com    kas2

Now run the bos addhost command on kas2:

kas2:~# bos addhost -server kas2.example.com \
	-host kas2.example.com -localauth
kas2:~# _
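
At this point the server CellServDB on kas2 should list both database servers, which can be verified as follows; the output should resemble this:

kas2:~# bos listhosts kas2.example.com -noauth
Cell name is example.com
    Host 1 is kas1.example.com
    Host 2 is kas2.example.com
kas2:~# _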

Following that, restart the ptserver and vlserver instances on kas1:

kas1:~# bos restart -server kas1.example.com \
	-instance ptserver -localauth
kas1:~# bos restart -server kas1.example.com \
	-instance vlserver -localauth
kas1:~# _

After performing this operation, pause for a few moments to allow enough time for the voting process before moving on to any other existing servers. Then check the status on kas1:

kas1:~# bos status localhost -noauth
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.
Instance fs, currently running normally.
    Auxiliary status is: file server running.
kas1:~# _
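
For a closer look at the voting process itself, the udebug command can be aimed at the ptserver (port 7002) or vlserver (port 7003) on any of the database servers; the output is rather verbose, but among other things it shows which host is currently recognized as the Ubik sync site:

kas1:~# udebug kas1.example.com 7003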

After the ptserver and vlserver instances have been restarted on kas1 and possibly any other existing servers, proceed to create them on the new server:

kas2:~# bos create -server kas2.example.com \
	-instance ptserver -type simple -cmd /usr/lib/openafs/ptserver \
	-localauth
kas2:~# bos create -server kas2.example.com \
	-instance vlserver -type simple -cmd /usr/lib/openafs/vlserver \
	-localauth
kas2:~# _

Check that both processes are up and running:

kas2:~# bos status localhost -noauth
Instance fs, currently running normally.
    Auxiliary status is: file server running.
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.
kas2:~# _

15. AFS client & test

Now that the OpenAFS server is up and running, enable the OpenAFS client. It was installed much earlier, but configured not to start automatically. Change that by editing /etc/openafs/afs.conf.client so that the following line reads:

AFS_CLIENT=true

Now restart the client:

~# /etc/init.d/openafs-client restart
Stopping AFS services: afsd openafs.
Starting AFS services: openafs afsd.
afsd: All AFS daemons started.
~# _

After the AFS client has been started, check out the contents of the AFS file space in the /afs/ directory:

~# ls /afs | head
1ts.org/
acm-csuf.org/
acm.uiuc.edu/
ams.cern.ch/
andrew.cmu.edu/
anl.gov/
asu.edu/
athena.mit.edu/
atlass01.physik.uni-bonn.de/
atlas.umich.edu/
~# ls /afs | wc -l
188
~# _
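
Most of these entries are foreign cells taken from the public CellServDB. To confirm that the client also knows about its own cell and can reach the servers in it, two fs subcommands are useful; the second should report that all servers are running:

~# fs wscell
This workstation belongs to cell 'example.com'
~# fs checkservers
All servers are running.
~# _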

16. Further reading
  • Eastlake D, Panitz A. 1999. RFC2606 − Reserved Top Level DNS Names. The Internet Society. HTML at the Internet FAQ Archives.
  • Kohl J, Neuman C. 1993. RFC1510 − The Kerberos Network Authentication Service (V5). HTML at the Internet FAQ Archives.
  • Wilkinson S. 2008. OpenAFS, FOSDEM 2008. Video (15:30 minutes) at YouTube.

17. Sources
  • Campbell R. 1998. Managing AFS: The Andrew File System. Prentice Hall. ISBN 0-13-802729-3. 479 pp.
  • Garman J. 2003. Kerberos, The Definitive Guide. O'Reilly & Associates, Inc. ISBN-13 978-0-596-00403-3. 253 pp.
  • Massachusetts Institute of Technology. 1985-2007. Kerberos V5 System Administrator's Guide. HTML at the Massachusetts Institute of Technology (MIT).
  • Milicchio F, Gehrke WA. 2007. Distributed Services with OpenAFS. Springer-Verlag. ISBN-13 978-3-540-36633-1. 395 pp.
  • Ocelic D. 2006-2010. Debian GNU: Setting up MIT Kerberos 5. HTML at Spinlock Solutions.
  • Ocelic D. 2006-2010. Debian GNU: Setting up OpenAFS 1.4.x. HTML at Spinlock Solutions.
  • OpenAFS. 2000-2009. Documentation. HTML at OpenAFS.

