The Capibara Distributed URL Cloaking Kit
Please go to the SourceForge project page for the downloadable project files, or for the latest CVS snapshot.
The c-duck project found its origins in the Capibara public URL
cloaking service. This was the first free
HTTP/1.1 based public URL cloaking service, operational from the mid nineties
until early 2001.
In the beginning of 2001 a problem occurred with the hosting of this
service, and no new parties were found that were both capable and willing
to take over the project.
It was in the process of finding an alternative solution that the
c-duck project started taking shape.
The idea arose to make a generic, distributed, and only mildly hierarchic
version of a cloaking kit suitable for 'home' use, and thus
c-duck was born. Cduck is a distributed URL cloaking kit designed
especially for 'home' style nodes.
http/1.1 + frames based cloaking
The main functionality of a URL cloaking 'node' is realized with a
combination of the HTTP/1.1 protocol and a single
HTML frame. A simple HTTP/1.1 server (or CGI on a regular HTTP
server) looks only at the server name requested, and from this (using a simple
database) constructs a small HTML document that basically just
consists of a frame pointing to some external long URL, and a simple
header with things like the document title and meta keywords in it.
With this simple server (or CGI) it is possible to do URL
cloaking for a large number of server names with very little use of
resources. This makes it very well suited for low bandwidth usage.
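The cloaking logic above can be sketched in a few lines. This is an illustrative sketch only, not the actual cduck implementation; the database layout and the names (CLOAK_DB, make_cloak_page) as well as the example hosts and URLs are assumptions.

```python
# Sketch of Host-header based frame cloaking: map a short server name
# to a small frameset document pointing at the real long URL.
CLOAK_DB = {
    "shortname.example.org": {
        "url": "http://members.isp.example/~user/deep/path/page.html",
        "title": "My Home Page",
        "keywords": "home, page, example",
    },
}

def make_cloak_page(host, db=CLOAK_DB):
    """Build the small frameset document returned for a cloaked host,
    or None when the requested server name is not in the database."""
    entry = db.get(host.lower())
    if entry is None:
        return None
    return (
        "<html><head>\n"
        f"<title>{entry['title']}</title>\n"
        f"<meta name=\"keywords\" content=\"{entry['keywords']}\">\n"
        "</head>\n"
        "<frameset rows=\"100%\">\n"
        f"<frame src=\"{entry['url']}\" frameborder=\"0\">\n"
        "</frameset></html>\n"
    )
```

A server or CGI wrapper only has to read the Host header of the request and return the result of this lookup, which is why the resource usage per cloaked name stays so small.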
Reliable cloaking service on low q.o.s. servers/lines
The basic idea behind c-duck is the use of the DNS system to accomplish
redundancy and high q.o.s. using low q.o.s. systems. The DNS system allows
a domain to have a number of authoritative name servers able to resolve all
DNS queries for the domain. Historically this is only used for redundant,
consistent DNS, but cduck uses it in a slightly different way. Cduck makes use
of the fact that both the DNS server and the HTTP server are running on
the same box, and combines this with semi-consistent DNS responses.
If a cduck DNS server gets a basic A record request, then this server will
typically answer with a 'that's me' response. This way the
redundancy of the DNS system is extended to provide full redundancy for
the HTTP part of the system. Combined with short DNS record TTL values,
this way of working makes it possible to set up a reliable URL cloaking
system using only low q.o.s. servers or connections (like DSL or
cable lines with a static IP address).
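A minimal sketch of such a 'that's me' answer, assuming standard DNS wire format (RFC 1035): the node echoes the question back and attaches a single A record carrying its own address with a short TTL. The function name, the placeholder address and the TTL value are illustrative assumptions, not cduck's actual code.

```python
import struct
import socket

MY_IP = "198.51.100.7"   # this node's own public address (placeholder)
TTL = 60                 # short TTL so a failed node drops out quickly

def thats_me_response(query, my_ip=MY_IP, ttl=TTL):
    """Answer any A record query with this node's own address."""
    txid = query[:2]
    # Walk the length-prefixed labels to find the end of QNAME.
    i = 12
    while query[i] != 0:
        i += query[i] + 1
    question = query[12:i + 5]          # QNAME + QTYPE + QCLASS
    header = txid + struct.pack(">HHHHH",
        0x8180,   # flags: standard response, recursion available
        1,        # QDCOUNT: the echoed question
        1,        # ANCOUNT: one A record ('that's me')
        0, 0)     # NSCOUNT, ARCOUNT
    answer = (b"\xc0\x0c"               # compression pointer to QNAME
              + struct.pack(">HHIH", 1, 1, ttl, 4)  # TYPE A, IN, TTL, RDLEN
              + socket.inet_aton(my_ip))
    return header + question + answer
```

Because every authoritative node answers with its own address, a resolver that fails over to another listed name server automatically reaches a working HTTP server as well, which is the redundancy extension described above.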
Main resources outside of the cduck system
Your ISP most likely spends considerable resources on running a reliable,
high q.o.s. web-server, probably with little functionality
and ugly long URLs, but in any case high q.o.s.
As this q.o.s. is much higher than what could be reached on a
single home PC on a *dsl line, it is a good idea to make use of
it for all master files. Next to this, you don't want to waste
too much of the limited upstream bandwidth of the *dsl line, so
any non-smart content should reside on your ISP's web-server.
Thus both web content and the master files for the DNS and cloaking
subsystems are kept on the www server of your ISP. Next to q.o.s.
and bandwidth considerations, the absence of this 'service'
from cduck should also be considered a security feature, as cduck
aims to expose as few services to the outside
world as reasonably possible, to limit possible exploits that
could arise from unknown bugs in the implementation.
With cduck being designed for 'home' style nodes comes the point that an
average home user is not as well versed in security issues as professional
server administrators are, and cannot spend the same amount of resources
on system security as an ISP or a professional company with an Internet presence can.
Thus the task of running a 'server' becomes
something rather risky. Cduck addresses this issue by incorporating the
setup of many high-security settings the different Unix systems offer as part
of its installation process. The basic philosophy most software implements
is 'code' security, and as you can see on any security mailing-list, almost
no server software can fully rely on this type of security alone. Cduck therefore
assumes that even with the greatest care, the 'code' will likely have some
security issues associated with it. Knowing this, and knowing that home
users will not give up their day job just to be sure to keep their home
system patched in time, cduck does not rely on just code security, but
relies on the concept of system and subsystem containment.
This means two things:
- If a cduck subsystem is broken, the containment will limit the
exploitability to the cduck system, restricted to the subset of 'rights'
given to the particular subsystem. The rest of your system will not be
in direct danger.
- If irregularities are detected in a (sub)system container by cduck's
trivial intrusion detection system, then this system will effectively
pull the plug on the cduck system.
capdns
Capdns is the process responsible for the standard 'that's me'
response of the DNS part of cduck. Next to the 'that's me'
responses that it gives by default to any unresolved A record
request, capdns has a more extended subset of the DNS protocol
implemented. This means that although capdns was mainly designed
as part of a distributed URL cloaking system, the capdns subsystem
could also be used as a highly secure DNS server outside of this context.
caphttp and capdb
The caphttp process is responsible for the HTTP protocol part of
the URL cloaking. For efficiency and security reasons it is
combined with a simple database access program, capdb, which is
responsible for accessing the cloaking database. The CVS tree currently
also contains a simple CGI script that can take over the task of
caphttp. This script can be useful if you want cduck to co-exist with an
already running web-server. (Note: running a stand-alone web-server on a
cduck node will make its security level sub-optimal.)
capcron
The capcron process is responsible for keeping the URL
cloaking database and the DNS RR database synchronized with their
master files. As described earlier, the master files are for q.o.s.
and security reasons kept off-site on the low functionality web-servers
provided by most ISPs to their customers.
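The synchronization step could look roughly like the sketch below. The master file location, its tab-separated line format, and the function names are all assumptions for illustration; cduck's actual master file format may differ.

```python
import urllib.request

# Assumed location of the off-site master file on the ISP's web-server.
MASTER_URL = "http://members.isp.example/~user/cduck/cloak.master"

def parse_master(text):
    """Parse an assumed master file format:
    one 'host<TAB>long-url<TAB>title' entry per line, '#' for comments."""
    db = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        host, url, title = line.split("\t", 2)
        db[host.lower()] = {"url": url, "title": title}
    return db

def sync(master_url=MASTER_URL):
    """Fetch the off-site master file and return a fresh cloaking database."""
    with urllib.request.urlopen(master_url) as resp:
        return parse_master(resp.read().decode("utf-8"))
```

Running such a fetch periodically from cron means the home node never has to accept inbound administration traffic: updates flow one way, from the high q.o.s. ISP server down to the node.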
Unix accounts and file-system (security)
Cduck tries to add additional security by compartmenting the subsystems
(capdns, caphttp, capdb and capcron) through the use of
a different user account for each subsystem, combined with
strict Unix file-system rights. This way a higher containment level is
reached for any bugs in any of the 4 subsystems, as a bug will
thereby be contained (to some extent) to the buggy subsystem
instead of the whole cduck system.
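The standard Unix way to realize this is for each daemon to drop root and become its own unprivileged account right after startup, sketched below. The account names are hypothetical; the real names created by the cduck installer may differ.

```python
import os
import pwd

# Hypothetical one-account-per-subsystem mapping (illustrative names).
SUBSYSTEM_USERS = {
    "capdns": "capdns",
    "caphttp": "caphttp",
    "capdb": "capdb",
    "capcron": "capcron",
}

def drop_privileges(subsystem):
    """Give up root and become the subsystem's own unprivileged user,
    so a bug in one subsystem is contained to that account's rights."""
    user = SUBSYSTEM_USERS[subsystem]   # KeyError for unknown subsystems
    pw = pwd.getpwnam(user)
    os.setgroups([])                    # drop supplementary groups first
    os.setgid(pw.pw_gid)                # set group before user:
    os.setuid(pw.pw_uid)                # setuid must come last, irreversible
```

Combined with file-system rights that give each account access only to its own files, a compromised caphttp cannot, for example, rewrite the DNS database owned by capdns.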
Linux 2.4 Net-filter and BSD ipfw user based fire-walling (security)
The use of a different user account for each subsystem is combined
with the advanced fire-walling possibilities of the Linux 2.4
kernel or the ipfw firewall for BSD. This firewall functionality
provides the possibility to implement
'user' based fire-walling. As every subsystem has its own user,
this means that every subsystem gets its own fire-walling rules
on all its network traffic. This greatly extends the containment
provided by the Unix access rights.
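On Linux this per-user containment maps to iptables rules with the owner match (`-m owner --uid-owner`). The sketch below only builds the command lines; the per-account traffic policy shown is an illustrative assumption, not cduck's actual rule set.

```python
def owner_rules(subsystem_users):
    """Build iptables commands restricting each subsystem account's
    outbound traffic to its own protocol; everything else is logged
    and dropped (user-based fire-walling sketch)."""
    # Hypothetical policy: what each account is allowed to send.
    policy = {
        "capdns": ["-p", "udp", "--sport", "53"],    # DNS answers only
        "caphttp": ["-p", "tcp", "--sport", "80"],   # HTTP responses only
        "capcron": ["-p", "tcp", "--dport", "80"],   # fetches master files
    }
    rules = []
    for user in subsystem_users:
        match = ["iptables", "-A", "OUTPUT",
                 "-m", "owner", "--uid-owner", user]
        if user in policy:
            rules.append(match + policy[user] + ["-j", "ACCEPT"])
        # Any other traffic from this account is suspicious:
        # log it (feeding the intrusion detection) and drop it.
        rules.append(match + ["-j", "LOG", "--log-prefix", f"cduck-{user}: "])
        rules.append(match + ["-j", "DROP"])
    return rules
```

The LOG rules are what produce the syslog entries that the captids monitoring described below watches for.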
chrooted environment (security)
Next to the Unix ACLs for containment of problems in the
subsystems, the complete cduck system uses another form of
containment provided by Unix OSs, namely the chrooted
environment. This greatly limits the access possibilities of
the cduck system as a whole to the file-system. Lacking access to
most of the file-system should add an extra layer of containment
to the cduck system as a whole.
captids
This experimental part of cduck works outside of the main cduck
systems, and tries to monitor the 'outgoing' network traffic
coming from cduck. The installation script of cduck will place
some random strings in some key parts of the cduck system, and
some other random strings in key parts of your main system.
Captids will try to monitor for the 'literal' occurrence of these
strings in any outgoing traffic from the HTTP and DNS services.
Further, captids monitors for syslog entries
generated by iptables on Linux or ipfw on BSD as a
result of attempts to initiate non-standard
network traffic from any of the network containers provided by the
user-based fire-walling.
Captids is basically a 'pull-the-plug' monitoring program that
pulls the (network and process) plug when it detects that the system has been
compromised. Captids also maintains a network monitoring buffer.
At the moment that captids pulls the plug it will dump this buffer to
the file-system. This buffer can then be used for postmortem analysis
of the detected, partially successful attack.
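The canary-string part of this scheme can be sketched as follows; the function names and the number and length of the marker strings are illustrative assumptions, not captids' actual parameters.

```python
import os
import binascii

def make_canaries(n=4):
    """Generate random marker strings for the installer to plant in
    key files; they should never appear in legitimate traffic."""
    return [binascii.hexlify(os.urandom(8)).decode() for _ in range(n)]

def check_traffic(outgoing, canaries):
    """Scan outgoing traffic for a literal occurrence of any canary.
    A hit means a key file has leaked, i.e. the system is compromised;
    the real captids would pull the plug at this point."""
    hits = [c for c in canaries if c.encode() in outgoing]
    return ("compromised", hits) if hits else ("ok", [])
```

Because the markers are random and secret, a single literal match in HTTP or DNS output is a very low false-positive signal that an attacker is reading files they should not be able to reach.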
We have been getting many requests asking whether we are planning to port cduck
to operating systems other than the current ones (Linux, FreeBSD and Solaris).
As expected for home users, the most requested are the MS operating systems.
For these MS operating systems the answer unfortunately has to be 'no'. MS
has demonstrated a lack of security awareness, and although it would be
very simple to implement the network functionality on an MS platform, any
server software running on the MS platform is doomed to be poor in overall
security by the lack of containment possibilities these operating systems offer.
Cduck was originally created on the Linux platform, and on this platform it is
now in its beta stage, close to production stable. Ports to FreeBSD and
Solaris have also been completed recently. The FreeBSD port should be considered
early beta, and the Solaris port alpha.
There are other operating systems that might be suitable targets for ports.
Depending on demand I will try to create these ports after the release of
the first production stable release for the current platforms.