This chapter outlines typical mechanisms used to implement IT Security. The mechanisms discussed are Cryptography, Access control lists, Authentication, implementation of rules & policies and availability mechanisms.
Introduction
Cryptography is the translation of information (known as plaintext) into a coded
form (known as ciphertext) using a key. Cryptography is mostly used to protect the
privacy of information (i.e. to limit who can access the information).
In a strong cryptosystem, the original information (plaintext) can only be
recovered by use of the decryption key, so the plaintext information is protected from
"prying eyes". A strong encryption algorithm is one which cannot be
easily inverted on a supercomputer today (i.e. on a PC in 10 years' time). There are two
principal methods of cryptography: shared key and public key cryptography.
The reference book on Cryptography is [crypto1].
www.cs.hut.fi/crypto/ [pointers to crypto SW]
ftp.funet.fi/pub/crypt [excellent: a "must visit"]
www.counterpane.com/ [Schneier: Blowfish, Twofish]
ftp.psy.uq.oz.au/pub/Crypto/ [E.Young's DES, SSL]
www.systemics.com/ [cryptix Java, C, Perl]
www.eskimo.com/~weidai/cryptlib.html [Wei Dai's C++ lib]
www.cs.hut.fi/ssh/ [Tatu Ylonen's SSH]
cwis.kub.nl/~frw/people/koops/lawsurvy.htm [Crypto+Law]
ftp://ripem.msu.edu/pub/crypt/sci.crypt/ -- sci.crypt Archives
www.swcp.com/~iacr/ -- International Association for Cryptologic Research
www.cs.adfa.oz.au/teaching/studinfo/csc/lectures/classical.html Classical Crypto Explanation
www.cryptosoft.com/snews/snews.htm [an index to lots of crypto news articles]
An article written by the author for SecurityPortal on Internationally Available Strong Crypto Products in September 1999.
Discussions on keylengths:
www.counterpane.com/keylength.html (published January 1996)
An excellent article in Byte May 1998 by Bruce Schneier [crypto2].
www.ssh.fi/tech/crypto/intro.html
Both parties exchanging data have a key. This key (unknown to others) is used to encrypt the data before transmission on one side and to decrypt on receipt on the other side. There are two kinds of symmetric ciphers: block ciphers (which encrypt blocks of data at a time) and stream ciphers (which encrypt each bit, byte or word sequentially). Sample algorithms:
Advantages: Shared key algorithms are much faster than their public key
counterparts.
Disadvantages: Both sides must know the same key and they must find a secure way of
exchanging it (via a separate secure channel).
Typical applications: Encryption of information to protect privacy, e.g. local encryption of data files (where no transmission is required), data session encryption and banking systems (PIN encryption).
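As an illustration of the shared-key principle (this toy XOR-keystream construction is not one of the real algorithms above, and the key and message are invented), the same key encrypts and decrypts:

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR the data with a SHA-256 keystream.
    Because XOR is its own inverse, encryption and decryption are the
    same operation with the same shared key."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = b"shared secret known to both parties"
ciphertext = xor_stream(key, b"transfer 100 CHF to account 42")
plaintext = xor_stream(key, ciphertext)   # the same key recovers the data
```

Anyone without the key sees only the ciphertext; anyone with it can run the same fast operation in either direction, which is why shared-key ciphers are preferred for bulk data.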
Both parties have a private key and a public key. The private keys
are known only to their owners, but the public keys are available to anyone (like
telephone numbers). The sending party encrypts the message with the receiver's public key
and the receiver decrypts with his own private key. This is possible due to the discovery
by Diffie and Hellman (at Stanford University, autumn 1975) that algorithms can be
developed which use one key for encryption and a different key for decryption. The public
and private key constitute a key pair.
The following public key crypto-systems are well known:
Patents: Both RSA and DH are patented in the U.S.; PKP (Public Key Partners)
of Sunnyvale, CA holds the licensing rights. The DH patent has since expired
(19.8.97) and the RSA patent (only valid in the USA) only holds until 2.9.00. Two
patents valid until 2008 (from Schnorr and Kravitz) affect the DSS.
Strength: The public-key algorithms rely on difficult-to-solve mathematical problems such
as taking logarithms over finite fields (Diffie-Hellman) or factoring large numbers into
primes (RSA) to create one-way functions. Such functions are much easier to
calculate in one direction than in the other, making brute force decryption virtually
impossible (with today's computing power and decent key sizes).
Newer techniques such as elliptic curves and mixture generators (e.g. RPK
at www.rpk.co.nz ) promise faster public key
systems.
Advantages of PK: Only the private key need be kept secret. No secret channels
need exist for key exchange, since only public keys need be exchanged. However the public
key must be transferred to the sender in such a way that he is absolutely sure that it is
the correct public key! Public key cryptography also provides a method for digital
signatures.
Disadvantages: Slow, due to the mathematical complexity of the algorithms.
Typical applications: Ensuring proof of origin, ensuring that only the receiver can decrypt the information, transmission of symmetric session keys.
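The principle can be shown with textbook RSA on deliberately tiny primes (real keys use moduli of 1024 bits or more; the numbers here are purely illustrative):

```python
# Toy RSA key pair: two primes give the public modulus n and the
# private exponent d derived from the public exponent e.
p, q = 61, 53
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

m = 65                       # the plaintext, as a number smaller than n
c = pow(m, e, n)             # anyone can encrypt with the public key (e, n)
recovered = pow(c, d, n)     # only the holder of d can decrypt
assert recovered == m
```

The security rests on the difficulty of factoring n back into p and q; with two-digit primes this is trivial, which is why key sizes matter so much (see the key-length discussion below).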
A hash function creates a fixed length string from a block of data. If the function is one way, it is also called a message digest function. These (fast) functions analyse a message and produce a fixed length digest which is practically unique, i.e. finding another message with an identical hash is very unlikely, even with very fast computers. There is no known feasible way of producing another message with the same digest. Such algorithms are normally used to create a signature for a message, which can be used to verify its integrity.
Advantages: much faster than encryption, and the output is of fixed length (so even a
very large file produces a short, fixed-size digest, which is much more efficient for data
transmission).
Disadvantages:
Typical applications: Many Internet servers provide MD5 digests for important files made available for downloading. Most digital signature systems and secure email system use a digest function to ensure integrity.
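Using Python's hashlib as an example (the file contents are invented), a digest has a fixed length and changes completely on any modification:

```python
import hashlib

data = b"contents of an important file"
digest = hashlib.md5(data).hexdigest()     # 128-bit MD5 digest, 32 hex chars

# Changing even a single character yields a completely different digest:
tampered = hashlib.md5(b"Contents of an important file").hexdigest()
assert digest != tampered
assert len(digest) == len(tampered) == 32  # length is independent of input size
```

This is exactly how the MD5 digests published alongside downloadable files are used: the downloader recomputes the digest locally and compares it with the published value.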
An interesting variation on hashes is the Message Authentication Code (MAC), a hash function with a key. To create or verify the MAC, one must have the key. This is useful for verifying that hashes have not been tampered with during transmission. Two examples are HMAC (RFC 2104) and NMAC, based on SHA-1.
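HMAC as defined in RFC 2104 is available in Python's standard hmac module; the key and message below are invented:

```python
import hashlib
import hmac

key = b"shared MAC key"
msg = b"pay 100 CHF to account 42"
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()   # HMAC-SHA-1 (RFC 2104)

# Verification succeeds only with the same key and an unmodified message:
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha1).hexdigest())
assert tag != hmac.new(b"wrong key", msg, hashlib.sha1).hexdigest()
```

Unlike a plain digest, an attacker who alters the message in transit cannot recompute a valid tag without the key.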
Applications such as PGP, S/MIME, Secure RPC (and hence secure NFS & NIS+) and SKIP use a combination of public key cryptography and symmetric cryptography to ensure non-repudiation and privacy. Hashing algorithms are used for (fast) generation of signatures.
There are several possible weaknesses in a crypto system, and the strength of the system is the strength of the weakest link.
The following discussion concentrates on the issue of key lengths, but strong keys are useless if the above issues are not addressed!
Computers are getting faster (computing power doubles about every 2 years), cheaper and better networked each year. All cryptographic algorithms are vulnerable to "brute force" attacks (trying all possible key combinations).
Symmetric (or shared key) algorithms:
In general, the key length determines the encryption strength of an algorithm: the
keyspace grows as 2 to the power of the key length, so 56 bit keys take 2^16 = 65,536 times
longer to crack than 40 bit keys.
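The arithmetic behind that figure:

```python
# Each extra key bit doubles the brute-force search space (2**bits keys),
# so the ratio between a 56-bit and a 40-bit keyspace is 2**16.
ratio = 2**56 // 2**40
print(ratio)    # 65536
```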
Most products come from the U.S. and are subject to U.S. export restrictions, currently either a 40bit limit or escrowing of keys.
Public (asymmetric) key algorithms:
Recommendations on key sizes: What is strong?
The encryption key size should be chosen, based on:
Attacker | Time Span | Recommended key size |
Curious hacker | Information must be protected for a few days. | Public Key 512 bits shared key 40 bits |
Curious hacker | Information must be protected for minimum 2 years. | Public Key 1024 bits shared key 60 bits |
Large organisation | Information must be protected for minimum 20 years. | Public Key 1568 bits shared key 90 bits |
Government | Information must be protected for minimum 20 years. | Public Key 2048 bits shared key 128 bits |
Here we define strong encryption as that which uses key sizes greater than or equal to:
Public Key 1568 bits (for RSA, DH and ElGamal)
Shared key 90 bits
"Strong" for newer encryption systems such as elliptic curve or quantum cryptography is not yet defined here.
See also the reference section above.
The "International Law Crypto Survey" of cryptographic laws and regulations throughout the world can be found at cwis.kub.nl/~frw/people/koops/lawsurvy.htm. This is changing rapidly, particularly since Sept.'99.
The U.S. and certain other countries consider encryption to be a weapon and strictly control exports. This is basically crippling the efforts to include encryption in Applications, Internet services such as Email and Operating systems.
In general the U.S. allows export of 40bit shared key systems and 512 bit public key systems.
Some countries (e.g. France) forbid encryption except when a key has been deposited in escrow (so that the legal authorities can listen to all communications if they need to).
Other countries allied to the U.S. (e.g. Germany, UK, Sweden, etc.) also enforce the U.S. restrictions by allowing strong encryption domestically, but restricting export of cryptographic devices.
The OECD made a set of recommendations on international cryptography in June 1997, see www.oecd.org/dsti/iccp/crypto_e.html . Many countries have almost no restrictions, but some (especially European) countries are considering some kind of restriction of the use of cryptography in the future.
The only strong encryption software widely available internationally, known to the author of this document, are from Australia, Finland, Ireland and Russia.
A DTS issues a secure timestamp for a digital document.
Certificates are digital documents attesting to the binding between an individual and his public key. They allow verification that a particular public key does in fact belong to the presumed owner. The ISO certificate standard is X.509 v3, which comprises: Subject name, Subject attributes, Subject public key, Validity dates, Issuer name, Certificate serial number and Issuer signature. X.509 names are similar to X.400 mail addresses, but with a field for an Internet email address. The X.509 standard is used in S/MIME, SSL, S-HTTP, PEM and IPsec key management.
LDAP (Lightweight Directory Access Protocol) is an X.500 based directory service for certificate management. Certain secure email products such as PGP5 have inbuilt support for querying and updating LDAP servers.
Certificates are issued by a certification authority (CA). The CA is a trusted authority who confirms the identity of users. The CA must have a trustworthy public key (i.e. very large) and its private key must be kept in a highly secure location. CAs can also exist in a hierarchy, in which lower level CAs trust higher level CAs.
Where sender and receiver must be absolutely sure of who their peer is, a CA is a possible solution. Another name for a CA is a Trusted Third Party (TTP). If both sides trust a common authority, this authority can be used to validate credentials from each side. E.g. the sender sends his public key, name (and other validating information) to the CA. The CA verifies this information as far as possible, adds its stamp to the packet and sends it to the receiver. The receiver can now be surer that the sender is who he says he is.
The problem with CAs is that you have to trust them! However, even banks have overcome
that problem with the implementation of SWIFT, a world wide financial transaction network.
See also:
A frequent requirement when protecting file confidentiality via encryption is Emergency File Access. If the file owner encrypts an important file and forgets the key, what happens? A second key is created, split into five parts such that any two of the five (partial) keys, when combined, could be used as a decryption key. The five (partial) keys could be kept by separate people, only to be used if the original owner was not able to decrypt the important file.
The Windows version of PGP supports these key splitting functions.
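The 2-of-5 splitting described above is an instance of threshold secret sharing. A minimal Shamir-style sketch over a prime field follows (the secret and field size are illustrative; real products such as PGP use hardened implementations):

```python
import random

P = 2**127 - 1   # a Mersenne prime; all arithmetic is modulo P

def split_2_of_5(secret: int):
    """Split `secret` into 5 shares; any 2 of them reconstruct it.
    A threshold of 2 means a random degree-1 polynomial f(x) = secret + a*x;
    each share is a point (x, f(x)), and the secret is f(0)."""
    a = random.randrange(1, P)
    return [(x, (secret + a * x) % P) for x in range(1, 6)]

def recover(share1, share2):
    """Lagrange interpolation at x = 0 from any two distinct shares."""
    (x1, y1), (x2, y2) = share1, share2
    inv = pow(x2 - x1, -1, P)            # modular inverse (Python 3.8+)
    return (y1 * x2 - y2 * x1) * inv % P

shares = split_2_of_5(123456789)
assert recover(shares[0], shares[3]) == 123456789
```

A single share reveals nothing about the secret (every secret value is equally consistent with it), which is exactly the emergency-access property wanted: no one key-holder can decrypt alone.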
Secure data transmission is the exchange of data in a secure manner over (presumed) insecure networks.
Requirements
Secure data transmission is required for class systems or higher and can be divided into
the following categories:
Secure data transmission is achieved by the use of cryptography. There are two principal cryptographic methods, public key and shared key. Normally a mixture of both is used for secure communication.
Using Cryptography for secure transmission
When choosing an authentication system, choose a signature function and encryption method
and hash function that require comparable efforts to break.
The encryption algorithms described in the previous section can be combined together to
produce a system for secure data transmission (refer to the diagram below):
The data is prepared for transmission:
After receipt, the data is decrypted:
Example systems using this approach: Sun's Secure RPC (hence NIS+, NFS), SKIP, S/MIME isn't a million miles away either.
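The combination used by these systems, a random symmetric session key transported under the receiver's public key, can be sketched as follows. The tiny RSA primes and the SHA-256 XOR keystream are illustrative stand-ins, not the real algorithms:

```python
import hashlib
import random

# Toy RSA key pair for the receiver (illustrative sizes only)
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # receiver's private exponent

def sym(key: int, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a SHA-256 keystream of the key."""
    stream = hashlib.sha256(key.to_bytes(4, "big")).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender: pick a random session key, wrap it with the public key (e, n),
# and encrypt the bulk data with the fast symmetric cipher.
session_key = random.randrange(2, n)
wrapped_key = pow(session_key, e, n)
ciphertext = sym(session_key, b"the actual message")

# Receiver: unwrap the session key with the private key d, then decrypt.
recovered_key = pow(wrapped_key, d, n)
assert sym(recovered_key, ciphertext) == b"the actual message"
```

The slow public key operation touches only the short session key; the bulk data goes through the fast symmetric cipher, which is the design point of all the hybrid systems named above.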
Ftp is available as standard on many platforms, so you may find it a convenient method of transferring data between, say, a UNIX machine and an IBM mainframe. (Note: use SSH/SCP if you can, but SSH is not available on all platforms.) What needs to be done to improve ftp security?
This section has been reduced, since more up-to-date articles have been written for SecurityPortal which are more comprehensive on SSH and Crypto products. Please refer to these articles:
[1] Internationally Available Strong Crypto Products, sp/int_crypto.html or the Version on SecurityPortal.
[2] All about SSH, sp/ssh-part1.html or the Version on SecurityPortal.
Netscape's secure socket layer is a "plug-in" socket layer (port 443 for HTTP
with SSL) offering client & server authentication, integrity checking, compression and
encryption. It is currently an Internet draft (not yet approved); see the TLS section below.
It is designed to fit on the transport layer in the TCP/IP stack (like Berkeley sockets),
but below applications (such as telnet, ftp, HTTP). SSL was introduced in July 1994. It is
designed for use in Internet WWW commerce applications, but also on LANs. The Netscape
Navigator and Microsoft explorer both provide support for SSL V2 and V3 (Explorer 3.0,
Navigator 3.0). Web servers supporting SSL3 include Apache & Netscape.
Algorithm:
The client connects to the server and sends a list of supported encryption algorithms. The
server replies with algorithm name, his public key, a shared key and the hash algorithm
name. The client can check if the public key does belong to that server. The client
generates a session key and sends it encrypted with the server's public key to the server.
The server decodes the session key (with his private key) and uses it to encrypt data
transmitted during the session. The client checks the server by sending a random string
encrypted with the session key. The server confirms receipt.
The above authentication method can also be used by the server to authenticate the client,
however it must have a public key for the client (not the case for WWW applications).
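Python's standard ssl module performs essentially this handshake. A hedged example follows; the host name is an arbitrary public HTTPS server and network access is assumed, so the connection step is guarded:

```python
import socket
import ssl

host = "www.example.com"                  # any HTTPS server (assumption)
ctx = ssl.create_default_context()        # verifies the server's certificate chain
assert ctx.verify_mode == ssl.CERT_REQUIRED

try:
    with socket.create_connection((host, 443), timeout=10) as tcp:
        # wrap_socket runs the handshake described above: algorithm
        # negotiation, server certificate check and session-key exchange.
        with ctx.wrap_socket(tcp, server_hostname=host) as tls:
            print("protocol:", tls.version())   # negotiated protocol version
            print("cipher:  ", tls.cipher())    # (name, protocol, secret bits)
except OSError as err:
    print("no network access:", err)
```

After the handshake, everything written to the wrapped socket is encrypted with the negotiated session key, transparently to the application, which is the "below applications, above transport" design the text describes.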
References:
TLS (Transport Layer Security)
In 1995, the IETF started work on the adoption of SSL as an Internet Standard, known as TLS. A draft of the protocol was published in March 1997, based on SSL 3.0. Some differences are the use of HMAC instead of MD5 for integrity checking and a slightly different set of encryption algorithms that are supported. See www.consensus.com/ietf-tls or www.ietf.org/html.charters/tls-charter.html .
Microsoft's Private Communication Technology (PCT) is aimed at replacing SSL.
It is more general in nature. Authentication and encryption negotiation are separate. It
is used in Explorer 3.0. Coming from Microsoft, it is not compatible with anything else,
but could become a standard.
Microsoft proposed to the IETF (in April 1996) and Netscape a combined SSL/PCT
implementation to ensure compatible solutions for Internet commerce. See pct.microsoft.com (link out
of date & no replacement found)
S-HTTP is an extension of the HTTP protocol which can run on top of "normal" TCP/IP, developed by CommerceNet. It provides services for transaction confidentiality and works on the application layer, specifically for secure HTTP connections. CommerceNet is the CA in the current implementation.
E-commerce requires secure methods for:
Sample products:
SET (Secure Electronic Transaction)
Secure Electronic Transaction is a set of protocols for electronic commerce proposed
by VISA, MasterCard and American Express (since Feb. 1996). SET uses MIME to transport
messages. SET 1.0 was released in May 1997 and can communicate across most media, not just
TCP/IP.
Authentication: The server requests authorisation, the server key is authorised by
a CA, keys are exchanged with the client and the transaction occurs. The SET digital
certification includes an account number and public key.
SET 2.0: Version 2.0 will feature a much-needed encryption-neutral architecture that encourages the development of faster (than RSA encryption used in SET 1.0) electronic-commerce applications. Vendors such as Certicom, Apple Computer, and RPK are all positioning themselves as alternatives to RSA. Elliptic Curve Cryptosystem (ECC) is a technology that is being pushed by both Certicom and Apple.
Products:
The standard DNS has been extended to provide a parallel public key infrastructure,
with each DNS domain having a public key. The domain key can be loaded at boot or securely
transferred from the parent domain.
See also the new version of BIND at ftp://ftp.isc.org/isc/bind/src/, www.tis.com and the IETF charter www.ietf.org/html.charters/dnssec-charter.html.
A company called AccessData (Utah, phone 1-800-658-5199, www.accessdata.com ) sells a package for ~ $200
that cracks the built-in encryption schemes used by WordPerfect, Lotus, Microsoft Office
& other products, ACT, Quattro Pro, Paradox, PKZIP, etc.
It doesn't simply guess passwords; it does real cryptanalysis.
Authentication is the process of verifying the identity of a subject. A subject (also called a principal) can be a user, a machine or a process, i.e. a "network entity". Authentication uses something which is known to both sides, but not to others, i.e. something the subject is, has or knows. Hence this can be biometrics (fingerprints, retina patterns, hand shape/size, DNA patterns, handwriting, etc.), passphrases, passwords, one-time password lists, identity cards, smart tokens, challenge-response lists etc. Some systems use a combination of the above.
The most common methods of strong authentication today consist of one-time password lists (paper), automatic password generators (smart tokens) and intelligent identity cards.
There is no industry standard today. Many different efforts are underway. In particular the Federated Services API, GSS API and RADIUS seem like logical ways to interconnect the current incompatible systems, without requiring vendors to throw away their existing products. It is hard to imagine such an API offering more than basic functionality, however (since advanced functionality is not common to all products). The IETF also has a number of active authentication groups:
For enterprise wide authentication and naming services DCE, NIS+ and NODS are the current main runners, with Microsoft's Active Directory service (planned for release with NT5) already generating interest for companies using NT Domains. Support for X.500 directory services will probably appear in most of these, allowing an interoperability gateway to be built. The fact that neither DCE nor NIS+ have been fully adopted in the PC client world is a pity, but perhaps reflects pricing and complexity problems.
SSH is a really impressive product for secure access to UNIX machines. It can use RSA, SecurID or UNIX user/password authentication.
For authentication across unsecured networks, proprietary (incompatible, expensive) encrypting firewalls using certificates or token based authentication are the current solution. Possible future acceptance of proposed standards such as SKIP or IPsec will, hopefully, provide long term interoperability.
Client/server applications run on many different types of systems, from IBM mainframes, VMS and UNIX to PCs. Unfortunately each of these systems has its own way of authenticating users. Database logins are normally not integrated with OS (user) logins. Usually a username and password identify a user to the system. If each system and application has its own logon process, the user is confronted with an array of (possibly) different usernames and passwords. This poses a real security risk, as the user may be tempted to write down all the different passwords, change them rarely, or use simple ones.
The ideal solution would be to provide a secure single signon, i.e. when a user logs on to a workstation on the network, his identity is established and can be shared with any system or application. Any user can sign on at any system anywhere and have the same name and password. The user needs to remember only one password. An even more secure signon can be achieved by using personnel ID cards to validate the user (via a card reader on each workstation) or via hand-held smartcards (with one-time passwords).
Achieving single signon is not an easy task in today's heterogeneous environment, but it would seem that Kerberos is the main contender with Sun's NIS+ also an option.
Strong authentication relies (normally) on something the user knows (e.g. a password) and something the user has (e.g. a list, smart card). Applications must support the authentication mechanism (or it must be transparent to the application). The following is a sample of strong firewall authentication methods/products.
Strong authentication mechanisms on firewalls are very important if protocols such as Telnet, Rlogin or ftp (writeable) are to be allowed. TCP/IP has inherent security weaknesses (confidentiality, IP spoofing) and these need to be addressed in a strong authentication product. If keys are used, key distribution must be considered.
No standards exist; each product has its own API and interoperability is often very difficult. Some firewall authentication servers can act as glue, allowing a common database to be used for different authentication products (an example is the Gauntlet authentication server).
A basic authentication method is supported in HTTP.
Algorithm: A WWW client sends a request for a document which is protected by
basic authentication. The server refuses access and sends code 401 together with header
information indicating that basic authentication is required. The client presents the user
with a dialog to input username and password, and passes these to the server. The server
checks the username and password and sends the document back if they are OK.
Encryption: Very weak. The user name and password are encoded with the base64
method. Documents are sent in clear text.
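The weakness is easy to demonstrate (the username and password are invented):

```python
import base64

# The "protection" in HTTP basic authentication is just base64 encoding:
credentials = base64.b64encode(b"alice:secret").decode()
header = "Authorization: Basic " + credentials
print(header)        # Authorization: Basic YWxpY2U6c2VjcmV0

# Anyone who can see the header trivially recovers the password:
user, password = base64.b64decode(credentials).split(b":")
assert password == b"secret"
```

Base64 is an encoding, not encryption: there is no key, so interception of a single request exposes the credentials.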
NT's domains are an extension of (IBM/Microsoft) Lan Manager (LM) and are not
hierarchical, but domain based - i.e. more suitable for separate LANs.
LM authentication has several dialects: PC NETWORK PROGRAM 1.0, MICROSOFT NETWORKS 3.0,
DOS LM1.2X002, DOS LANMAN2.1, Windows for Workgroups 3.1a, NT LM 0.12, CIFS. The last two
are the most interesting as they are used in NT4.
Kerberos is a secret-key network authentication service developed at MIT by Project Athena. It is used to authenticate requests for network resources in a distributed, real-time environment. DES (i.e. shared key) encryption and CRC/MD4/MD5 hashing algorithms are used. The source code is freely available (for non-commercial version) and Kerberos runs on many different systems.
Kerberos requires a "security server" or Kerberos server (KDC) which acts as a certification authority, managing keys and tickets. This server maintains a database of secret keys for each principal (user or host), authenticates the identity of a principal who wishes to access secure network resources and generates session keys when two users wish to communicate securely.
There are many versions of the Kerberos authentication system:V3 (MIT), V4 (commercial: Transarc, DEC) and V5 (in beta/RFC 1510, DCE, Sesame, NetCheque). BSDI is the only OS to bundle the Kerberos server. Solaris 2 bundles a Kerberos client, which among other things allows NFS to use Kerberos for authentication.
Microsoft intends to support a version of Kerberos in NT5; it remains to be seen how
compatible it will be with existing versions.
Entegrity Solutions (www.entegrity.com) offer solutions for
making DCE the core of enterprise security. PC-DCE interfaces to other non-Kerberos
authentication systems such as SecurID and Entrust PKI Certificates.
Kerberos is not without problems:
NIS+ is a hierarchical enterprise wide naming system, based on Secure RPC. In the default configuration it provides user, group, services naming, automounter and key distribution. NIS+ can be easily extended to define customised tables.
NIS+ is an improved version of the UNIX de facto standard NIS (Network Information System, or yellow pages). NIS & NIS+ were developed by Sun. NIS is available on most UNIX platforms, but has very weak security. NIS+ is much more secure but is only available on Sun's Solaris and, recently, HP-UX and AIX.
Security is based on the use of Secure RPC, which in turn uses the Diffie/Hellman public key cryptosystem.
Disadvantages:
BoKS is a full authentication/single signon package for PC and UNIX systems, made by DynaSoft in Sweden. DynaSoft is a 10 year old company employing about 50 people. The following is an extract from their home page: www.dynas.se/prod/prod_eng.html :
The BoKS concept has been developed and improved by DynaSoft since 1987. It is a comprehensive security solution covering areas such as access control, strong authentication, encryption, system monitoring, alarms and audit trails. BoKS functions in UNIX and DOS/Windows environments, offers high reliability and is ported to most UNIX platforms. BoKS can also be integrated with enterprise management systems such as Tivoli and database applications such as Oracle and Sybase.
BoKS can use Security Dynamics SecurID smart tokens. Although the author has little practical experience with BoKS, it seems to be in extensive use where high security is required. It runs on UNIX (SunOS, Solaris and HP-UX) and PCs (Win95 & NT versions should be introduced in late 1996). BoKS uses shared key encryption (40 bit DES outside the U.S., 56 bit DES in the U.S.).
S/Key is a one time password system from Bellcore. Public domain versions are also available. Features:
OPIE is a public domain release of the U.S. Naval Research Laboratory's one-time password system. OPIE is an improved version of S/Key Version 1 which runs on POSIX compliant UNIX-like systems and has the following additional features to S/Key:
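The hash-chain idea shared by S/Key and OPIE can be sketched as follows. MD5 stands in for the hash and the seed/passphrase handling is simplified compared with the real protocols:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.md5(x).digest()   # the real systems use MD4/MD5

# Setup: hash the secret N times; the server stores only the final value,
# so a stolen server database reveals no usable passwords.
N = 100
secret = b"passphrase" + b"seed"      # simplified seed format (assumption)
top = secret
for _ in range(N):
    top = h(top)
server_stored = top                   # hash^100(secret)

# Login: the user supplies hash^99(secret); the server applies one more
# hash and compares. A captured password is useless for the next login.
otp = secret
for _ in range(N - 1):
    otp = h(otp)
assert h(otp) == server_stored        # accepted
server_stored = otp                   # next login will expect hash^98(secret)
```

Each successful login moves the stored value one step down the chain, so every password is used exactly once, defeating replay of sniffed credentials.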
The SecurID system from Security Dynamics is one of the more established names on the market today. It works with most clients (UNIX, NT, VPN clients, terminal servers etc.) and many firewalls provide support for SecurID. The server which manages the user database and allows/refuses access is called ACE and is delivered only by Security Dynamics (whereas clients are delivered by several vendors). The author has used this system for providing secure remote access to hundreds of users on diverse clients.
The tokens are known as SecurID and are basically credit-card-sized microcomputers which generate a unique password every minute. In addition, each user is assigned a 4-character PIN code (to protect against stolen cards). When a user logs on, he enters his PIN plus the current pass-code displayed by the SecurID token. The server contains the same algorithm and secret encryption key, allowing both sides to authenticate securely. Software tokens are available for Win95/NT, as are SecurID modems from Motorola. The tokens typically last 3 years.
This form of authentication is strong, but there is a risk of a session being hijacked (for example if the one time password doesn't change often).
Cost: The Smart Token often costs about $60 (every 3 years), which may seem expensive
when many users are involved, particularly when the server software can cost an additional
$150 per user.
Stability: The author has been running an ACE server with 200 users for several years.
While it can be quirky to setup, it is rock solid and has never crashed (on Solaris 2.5 or
2.7).
The server, which supports mirroring for high availability, runs on UNIX, with clients for virtually all platforms. ACE is configured via a Motif GUI that is certainly not perfect. A more useful GUI is available in the NT Remote Admin tool. ACE supports NT/RAS, ARA, XTACACS and RADIUS authentication protocols. See www.securitydynamics.com/products/datasheets/asvrdata.html or www.securitydynamics.com . For some sysadmin notes see www.livingston.com/tech/docs/radius/securidconfig.html.
Safeword by Secure Computing www.securecomputing.com is direct competition for ACE/SecurID. Its servers run on UNIX. It supports many authentication protocols such as TACACS, TACACS+ and RADIUS.
Many token types are supported: Watchword, Cryptocard, DES Gold & Silver, Safeword Multi-sync and SofToken, AssureNet Pathways SNK (SecureNet Keys). See also www.safeword.com/welcome.htm .
This one-time password system from Racal Guardata is well-established competition to SecurID. It works basically as follows:
Attacks could occur in the form of chosen plaintext guessing. Racal Guardata also produce the Access Gateway.
This system from AssureNet Pathways may be of interest to those using NT servers, since the server runs on NT (not UNIX like most of the above). Features: Authentication via ARA, NT/RAS, TACACS+. Multiple servers are possible via database replication.
The token used are SecureNet Keys (SNK) hardware or software tokens. The challenge/response authentication uses DES, the PIN is never transmitted over the network and sensitive information is encrypted. See also www.axent.com/product/def2.htm
Merit Network and Livingston developed the RADIUS protocol for identification and authentication. There is an IETF working group defining a RADIUS standard.
XTACACS is an enhancement of TACACS (Terminal Access Controller Access Control System), a UDP based system from BBN which supports multiple protocols. SLIP/PPP, ARA, Telnet and EXEC protocols are supported.
TACACS+ is also an enhancement of TACACS (from Cisco), but is not compatible with XTACACS or TACACS. It allows authentication via S/Key, CHAP and PAP in addition to SLIP/PPP and telnet. Authentication and authorisation are separated and may be individually enabled/configured.
PAP (password authentication protocol) involves the username and password being sent to a server in clear-text. The password database is stored in a weakly encrypted format. CHAP (Challenge Handshake Authentication Protocol) is a challenge/response exchange with a new key being used at each login. However, the password database is not encrypted. Some vendors offer variations of the PAP and CHAP protocols but with enhancements, for example storing passwords in encrypted form in CHAP.
An ACL defines who (or what) can access (e.g. use, read, write, execute, delete or create) an object. Access Control Lists (ACLs) are the primary mechanism used to ensure data confidentiality and integrity. A system with discretionary access control can discern between users and manages an ACL for each object. If the ACL can be modified by a user (or data owner), it is considered discretionary access control. If the ACL is specified by the system and cannot be changed by the user, mandatory access control is being used. There are no standardised ACLs for access to OS services and applications in UNIX.
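A discretionary ACL check can be sketched in a few lines (the object and subject names are invented; real systems attach such lists to files and let the owner edit them, which is what makes the control "discretionary"):

```python
# Each object carries a list of (subject -> permissions) entries.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}

def check_access(subject: str, obj: str, perm: str) -> bool:
    """Grant access only if the object's ACL lists this permission
    for this subject; unknown subjects or objects are denied."""
    return perm in acl.get(obj, {}).get(subject, set())

assert check_access("alice", "payroll.db", "write")
assert not check_access("bob", "payroll.db", "write")   # bob may only read
```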
To secure a particular environment, mechanisms are required which allow the rules and policies to be implemented. Implementing rules and policies network wide on UNIX machines is not easy and often requires development of scripts. Another possibility is the use of a tool such as Tivoli, which is designed to implement rules and policies in a networked heterogeneous environment.
NT allows setting of some, but not all, rights per user. It takes a very different approach to UNIX in this area (see the chapter "NT"). Implementing rules and policies across a network of servers is not supported by standard utilities either.
Things to watch out for:
The computing environment can be protected with Air Conditioning, locked server rooms and UPS (220V protection).
Redundancy increases availability and may be implemented in hardware (RAID), disk drivers or OS (RAID) or at the application/service level (e.g. Replication, transaction monitors, backup domain controllers).
This is often the cheapest and easiest to implement, where available. The principal problem is that few applications support this type of redundancy. Clients connecting to these servers automatically look for a backup or duplicate server if the primary is not available.
The classical method of increasing system availability is to duplicate one of the
weakest parts in a computer: the disk. RAID (Redundant Array of Inexpensive Disks) is a
de-facto standard for defining how standard disks can be used to increase redundancy. The
top RAID systems duplicate disks, disk controllers, power supplies and communication
channels. The simplest RAID systems are software-only disk drivers which group together
disparate disks into a redundant set.
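The redundancy principle used by parity-based RAID levels can be shown with XOR (the block contents are invented; real arrays work on whole disk blocks):

```python
# Parity RAID (e.g. level 5) stores an XOR parity block alongside the data;
# any single lost block can be rebuilt by XOR-ing the survivors with it.
d0, d1, d2 = b"\x10\x20", b"\x0f\x0f", b"\xa0\x0b"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Suppose the disk holding d1 fails; rebuild its block from the others:
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
assert rebuilt == d1
```

Because XOR is associative and self-inverse, one parity block protects against any single-disk failure at the cost of one disk's worth of capacity, rather than full duplication.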
There are several RAID levels:
Things to watch for in RAID systems:
If applications do not provide built-in redundancy, special software (and perhaps hardware) can be installed on two systems to provide hot standby functionality. The principle is as follows: both systems can access shared (high availability, dual ported) disks and have duplicate network connections. The backup machine monitors the primary constantly, and if it notices that the primary is no longer functioning, it takes control of the shared disks, reconfigures itself to have the same network address as the primary and starts up the applications that were running on the master. Of course this will only work with certain applications: e.g. if the primary crashes and its principal application trashes its configuration or data files in doing so, the backup server will not be able to start the application.
An example of this is IBM's HACMP product, or Sun's HA cluster.
Specialised computer systems offer complete redundancy in one system, i.e. CPU, memory, disks etc. are fully duplicated. A single point of failure should not exist. These systems often require specially adapted operating systems, cost a fortune and are rarely compatible with mainstream systems. Rarely used in the commercial arena, they are mostly reserved for military or special financial use.
An example is the Stratus line of systems.