Next Back [Contents] Online search in the handbook LITTLE-IDIOT NETWORKING

22.8 Installing servers with CHROOT

It is always especially important to secure WWW servers, SQL databases, FTP servers and the like in such a way that, in the event of a break-in, an attacker gains as few privileges as possible and remains confined to a subdirectory. It goes without saying that daemons should be started with the fewest privileges possible. Starting a daemon inside a CHROOT() environment really ought to be a matter of course. Many commercial servers, such as the Netscape Enterprise Server, the ROXEN Challenger and many FTP daemons, support this functionality out of the box, but in many cases it is not enabled by default. Please check the manual first before following this installation guide. The guide applies generally to all daemons under UNIX and increases the security of a server enormously without any negative effect on performance. Once you have worked through this exercise, you should secure all server daemons under UNIX in this way.

This section describes in detail how to secure a WWW server under UNIX behind a firewall with the help of a CHROOT() script. See also the chapter on CHROOT(). The original can be found at http://hoohoo.ncsa.uiuc.edu/docs/tutorials/chroot-example.html and was written by Denice Deatrich of CERN, Switzerland. It is written in English and easy to follow.

The following is a post from last year about how someone went about creating
a chroot web server. Though he used CERN's http, it applies equally well to
most web servers. 
From: deatrich@hpopc1.cern.ch (Denice Deatrich)
Subject: how to chroot your web tree -- an example

(this posting is a bit long; but might be useful to people who want a
detailed example of chroot-ing a web tree) 

Earlier this year I chroot-ed our web tree, and I'm REALLY glad I did. Our
web site fulfills many functions, and grows like mad. Various people
contribute to the tree, and they will try almost _anything_, even people who
you thought knew little about unix... 

Why do this? Well, it suffices to read comp.security.unix, or
comp.infosystems.www.authoring.cgi to understand why you should be aware of
possible security pitfalls in serving a web tree. So why not take extra
precautions to protect your server? 'chroot'ing an application definitely
limits the byte-space that an application can roam. It will NOT solve all
problems, but at least it will contain things. Holy smokes! There is so much
Internet-mania right now, and there are so many uninformed people jumping on
the bandwagon... So if you are a system administrator then you should (try
to) stay one step ahead of them all... 

There is an extra benefit in chroot-ing a web tree: we can move our web tree
anywhere, anytime if a disk dies (especially if you have a 'spare' host that
can suddenly 'assume' your web-hosts identity when your boot volume dies).
This might be important if you cannot live without your tree. Don't laugh
--if all of your colleagues' documentation lives there, then, well, you
can't live without it. Sometimes documentation really IS important. 

Before you start you have to decide if this is a do-able task. If your
entire tree can live on one file system, then this may be for you. But if
links and cgi-scripts reach out across filesystems and nodes and people's
home directories (in this 'automounted' or 'afs-ed' world), then this
probably isn't your cup of tea. You have to know your web tree really well
first. In particular, take a close look at your cgi-scripts and all scripts
and utilities called by your cgi-scripts before you start. 

We use the CERN http daemon, and our web site is served by an HP running
9.05 HP-UX and NIS (but it is not an NIS server). This information is
necessarily HP-specific, but it should generalize. It took me a couple of
afternoons of work to produce a working web tree. 

So these are the steps I followed to chroot a LIVE web tree. It wasn't as
painful as I thought it would be, but it requires a bit of work if you want
to provide a high level of functionality. 

In the following steps I have assumed:
the web tree owner is:                               www
living in group:                                     webgroup
I have also assumed that the new web root is at:     /wtree

Create a tree in a NEW web root and give it the appropriate ownership. If
only one account edits files in the web tree, then you are set. If multiple
user accounts update files, then presumably you could have a special group
for web updating, and have people 'newgrp' to this group. Thus: 
       chown -R  www:webgroup /wtree
       chmod -R 755 /wtree   (or 775 if 'webgroup' needs write permission)

You might also choose to create some kind of a 'home' directory structure
(see http://hoohoo.ncsa.uiuc.edu/docs/tutorials/chroot.html) 
**From this point you work as user 'www' 

Create the skeleton tree in the new web root. You will probably need:
bin, etc, tmp, dev, lib 
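The skeleton can be sketched in a few shell commands. A minimal sketch, assuming a demo location (`/tmp/wtree-demo` is only for illustration; substitute your real web root, e.g. /wtree, on the server):

```shell
# Create the minimal directory skeleton for the chroot tree.
# WTREE points at a demo path; use /wtree on the real host.
WTREE=${WTREE:-/tmp/wtree-demo}
mkdir -p "$WTREE/bin" "$WTREE/etc" "$WTREE/tmp" "$WTREE/dev" "$WTREE/lib"
chmod 1777 "$WTREE/tmp"      # world-writable with sticky bit, like the system /tmp
ls "$WTREE"
```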

You have to decide whether you are going to put sharable libraries in your
web tree. I decided to NOT do this (though in the end I put one library
there). If I was to do this all over again, I probably wouldn't have opted
for only statically-linked versions. At the time I was worried about
'duplicating' my file system in the web tree. 
If you decide to put sharable libraries in the tree, then you have to figure
out which ones. This might not be easy! Anyway, you should copy a useful set
of utilities to your /wtree/bin directory, and copy any necessary libraries
to /wtree/lib or /wtree/usr/lib. 
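One way to figure out which shared libraries a given utility needs is to ask the dynamic linker. A sketch, assuming a modern system with `ldd` (the HP-UX of the post's era had its own tools for this) and a demo destination path:

```shell
# Copy one utility plus every shared library it needs into the tree.
# DEST is a demo path; use /wtree on the real host.
DEST=${DEST:-/tmp/wtree-demo}
BIN=$(command -v cat)                      # pick any utility your scripts need
mkdir -p "$DEST/bin" "$DEST/lib"
cp "$BIN" "$DEST/bin/"
# ldd prints lines like "libc.so.6 => /lib/.../libc.so.6 (0x...)"
for lib in $(ldd "$BIN" | awk '$2 == "=>" { print $3 }'); do
  cp "$lib" "$DEST/lib/"
done
ls "$DEST/bin" "$DEST/lib"
```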

Note: the 'useful set of utilities' is necessary if you use cgi-scripts in
your web tree. Therefore which ones you need will depend on which utilities
are referenced by your cgi-scripts. 

If you do as I did and opt for statically-linked versions, then the easiest
thing is to get a bunch of GNU file utilities and compile them statically, so
that you don't need shared libraries. These utilities are available in these
GNU file sets:
(bash, binutils, diffutils, find, gawk, grep, sed, textutils) 

Only install what you will use; for example, don't install 'df' unless you
want to try to provide it with the 'mount table'. This is an example set of
GNU utilities: 

    bash      cat       cksum     comm      cp        csplit    cut
    du        expand    find      fmt       fold      gawk
    grep      head      join      ln        locate    ls        mkdir
    mv        nl        od        paste     pr        rm        rmdir
    sort      split     sum       tac       tail      touch     tr
    unexpand  uniq      wc        xargs

Copy all of these files into /wtree/bin 
I also compiled a statically-linked version of perl (version 5). This took a
few iterations, mostly because I dislike the 'Configure' script. So I
installed perl in /wtree/bin/ and the Perl libraries in

In addition, 'date' and 'file' are useful. So I copied the HP versions of
them, and took the shared library and dynamic loader that I needed for them.
Thus on an HP system you need to copy /lib/libc.sl and /lib/dld.sl into
/wtree/lib/ For 'file' you also need 'magic', which you should put in

It is also useful to create a symbolic link from bash to 'sh' and from gawk
to 'awk' in /wtree/bin. Note: pretending that bash is 'sh' is quite
functional; however on HP-UX the 'system()' C-function wants /bin/posix/sh.
Trying to fool it with a link to bash won't work (I was compiling 'glimpse'
for our web tree, and it uses lots of inane system() calls. So I was forced
to copy /bin/posix/sh into /wtree/bin/posix/) 
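The links themselves are one-liners; a sketch against a demo tree (use /wtree/bin on the real host, where the binaries have actually been copied in):

```shell
# Stand-ins simulate the bash and gawk binaries already copied into the tree,
# then the aliases are created as symbolic links.
TREE=${TREE:-/tmp/wtree-demo}
mkdir -p "$TREE/bin"
touch "$TREE/bin/bash" "$TREE/bin/gawk"    # stand-ins for the real binaries
ln -sf bash "$TREE/bin/sh"                 # sh  -> bash
ln -sf gawk "$TREE/bin/awk"                # awk -> gawk
ls -l "$TREE/bin"
```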

PLEASE NOTE: place COPIES of files in the web tree, do not use hard links!
Otherwise, why are you bothering to chroot the tree? Anyway, the web tree
should be able to live on any disk... hard links can't! 

Make the /wtree/dev/null device file 
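Creating the device node needs root, and the major/minor numbers are platform-specific, so read them off your host's own /dev/null first. A sketch (the 1,3 pair in the comment is the common Linux value, not necessarily what your platform uses):

```shell
# Inspect the host's null device to learn its type and device numbers.
# The first column of the listing starts with 'c' (character device).
ls -lL /dev/null
# Then, as root, recreate it inside the tree, e.g. with Linux numbers:
#   mknod /wtree/dev/null c 1 3
#   chmod 666 /wtree/dev/null
```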

Copy any needed networking files into /wtree/etc; the following should do
from your host's /etc/ tree. By all means, make these files as minimal as
possible:
        resolv.conf       ## the DNS resolver file
    and maybe:
        nsswitch.conf     ## Naming Server fall-over file; useful with NIS
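Copying these files can be scripted; a sketch against a demo tree (TREE is illustrative, and only files that actually exist on the host are copied):

```shell
# Copy the minimal networking configuration into the tree's etc/.
TREE=${TREE:-/tmp/wtree-demo}
mkdir -p "$TREE/etc"
for f in resolv.conf nsswitch.conf; do
  if [ -f "/etc/$f" ]; then
    cp "/etc/$f" "$TREE/etc/"
  fi
done
ls "$TREE/etc"
```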

Now go and compile the daemon 'httpd' statically. Also make statically-
linked versions of cgiparse and cgiutils. Copy all of these into /wtree/bin/
Make any additional directory structure that you will need in your web tree
    (or just copy these from your existing web  tree)

And of course create a directory for your cgi-bin tree, using whatever name
you have specified in the http configuration file. Copy your prepared
configuration file 'httpd.conf' into /wtree/etc/ (or whatever sub-directory
you have designated for this purpose). Also prepare and copy any other httpd
files that you will need; for example, 'passwd', 'group', 'protection' (and
copy an appropriate .www_acl file into these directories as well). 

Make a chroot wrapper for your daemon, compile and install it, and update
whatever script will be launching it from boot up. For example, if I call my
wrapper 'httpd' and install it in /usr/local/bin, then from /etc/inittab the
entry looks something like: 
        blah:run_level:once:/usr/local/bin/httpd /wtree >>/tmp/httpd.log

An example wrapper follows. The 'uMsg()' calls are just home-brewed function
calls that output error messages. Substitute your own error messages: 
/** wrapper BEGINS **/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "uUtil.h"  /* for uMsg() */

int main( int argc, char *argv[] )
{
  uid_t uid  = your_web_user_uid_here;
  gid_t gid  = your_web_user_gid_here;
  int   ierr = 1;
  char  *p;

  if( argc != 2 ) {
    fprintf( stderr, "USAGE: %s WEB_ROOT\n", argv[0] );
    fprintf( stderr, "WHERE: WEB_ROOT - is the root of the web tree\n" );
  } else {
    p = argv[1];
    if( chdir(p) )
      uMsg( U_FATAL, "chdir to %s failed: %S", p );
    else if( chroot(p) )
      uMsg( U_FATAL, "chroot to %s failed: %S", p );
    else if( setgid(gid) != 0 )   /* drop the group id first ...        */
      uMsg( U_FATAL, "setgid failed: %S" );
    else if( setuid(uid) != 0 )   /* ... then the user id; the reverse
                                     order would make setgid() fail     */
      uMsg( U_FATAL, "setuid failed: %S" );
    else {
      execl( "/bin/httpd", "httpd", (char *)0 );
      uMsg( U_FATAL, "execl failed for httpd: %S" );
    }
  }
  exit( ierr );
}
/** wrapper ENDS **/

Now you have to install your existing html files into your new tree. If
people have been using relative pathnames in their html files, then there
won't be many difficulties. In my case I just copied all necessary trees
into the new location (using korn-shell syntax): 
        cd /old_web_tree
        for i in dir1 dir2 dir3 dir4 blahblahblah ; do
          cp -r $i  /wtree/$i
        done
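An alternative to cp -r that also preserves permissions and symbolic links is a tar pipe. A sketch on demo directories (on the real host the source would be /old_web_tree and the target /wtree):

```shell
# Build a tiny demo source tree, then copy it with a tar pipe.
SRC=/tmp/oldtree-demo
DST=/tmp/wtree-demo
mkdir -p "$SRC/dir1" "$DST"
echo hello > "$SRC/dir1/index.html"
( cd "$SRC" && tar cf - dir1 ) | ( cd "$DST" && tar xf - )
cat "$DST/dir1/index.html"
```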

You will have to correct any html files that have full pathnames in their
links. You will also have to correct any cgi-scripts or shell scripts that
have incorrect pathnames in them;
for example: #!/usr/local/bin/perl 
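Scripts whose interpreter line points outside the tree can be flagged with a recursive grep before you fix them one by one. A sketch on a demo tree (the `#!/usr/local` pattern and all paths are illustrative):

```shell
# Plant a script with an absolute shebang, then flag it.
TREE=${TREE:-/tmp/wtree-demo}
mkdir -p "$TREE/cgi-bin"
printf '#!/usr/local/bin/perl\nprint "hi\\n";\n' > "$TREE/cgi-bin/demo.cgi"
grep -rl '^#!/usr/local' "$TREE/cgi-bin"   # lists files whose paths need fixing
```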

Now go around and put out fires. 
It is possible to write C-utilities that will remote-shell to a trusting
host [using getservbyname() and rcmd()] to get some time-critical
information that you want to have accessible from your web tree. (Well,
people use web-trees for all kinds of purposes). The utility can screen
options to ensure that only 'safe' requests are sent to the trusting host.
This avoids the necessity of keeping a small UNIX passwd file in your web
tree (but requires a small services file in /wtree/etc/ if you aren't
running NIS). 
It is useful to make a shell wrapper that you can use to debug script
problems in your web tree. Using exactly the same wrapper as above,
substitute the following in the execl() function: 
        execl( "/bin/bash","bash", (char *)0 );

Note that it has to be setuid root. 
For example, if you call this chroot-ed shell: cr_shell, then on your web
host, you can launch a chroot-ed shell to test scripts (but do it in a
sub-shell so that you don't destroy your environment): 

     $ (export PATH=/bin; export HOME=/; /my/path/name/to/cr_shell /wtree )
