14. Databases & Transaction Monitors
The X/Open DTP (Distributed Transaction Processing) standard is
based on USL's TUXEDO and is an open standard for OLTP (Online Transaction
Processing). Transaction processing is an application programming system which
defines and co-ordinates interactions between multiple users and multiple databases or
other shared resources. DTP extends transaction processing to include multiple resources
and networked machines. Online transaction processing is the part of DTP which manages
the logging and execution of individual transactions.
DTP can be described as middleware which allows a (possibly transaction
oriented) application to be distributed across numerous machines in a heterogeneous
environment.
DTP comprises three modules:
1. AP = Application
2. TM = Transaction Manager
3. RM = Resource Manager (often a front end to a database). The RM must be
capable of two phase commit (2PC) and must support the XA protocol.
The following protocols are defined in the DTP standard:
- XATMI = AP to CRM protocol, based on Tuxedo's ATMI (see below).
- XA = RM to TM protocol. The TM always initiates the connection, but an RM can register/
unregister itself with the TM. XA may be synchronous or (optionally) asynchronous. XA
makes 2PC possible through such functions as xa_start(), xa_commit(), xa_complete(),
xa_recover(), xa_rollback().
- The server part of the AP talks to the RM via SQL or other protocols.
- TMS = Transaction Management System, required for the implementation of 2PC over
distributed databases.
- TX = AP to TM protocol for defining and managing global transactions. It supports
chained and unchained transactions, e.g. tx_begin(), tx_commit(), tx_rollback() (see the
sketch after this list).
- TxRPC = AP to CRM protocol, based on DCE RPC.
- XA+ = TM to CRM protocol. Based on XA, it defines the verbs necessary for the RM to
initiate communication with the TM. It may never become a standard: it is controversial,
was voted down in the June 1993 meeting of the TP working group, and USL doesn't want it.
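To make the TX verbs concrete, here is a minimal client-side sketch in C (an illustration
only: do_update() is a hypothetical application routine, and error handling is reduced to
the essentials):

#include <tx.h>                 /* X/Open TX interface */

extern int do_update(void);     /* hypothetical application work on the RMs */

int run_transaction(void)
{
    int rc = 0;

    if (tx_open() != TX_OK)     /* attach the linked resource managers */
        return -1;
    if (tx_begin() != TX_OK)    /* start a global transaction */
        rc = -1;
    else if (do_update() == 0)
        rc = (tx_commit() == TX_OK) ? 0 : -1;   /* 2PC across all RMs */
    else {
        tx_rollback();          /* application error: undo on all RMs */
        rc = -1;
    }
    tx_close();                 /* detach from the RMs */
    return rc;
}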
XATMI is a subset of Tuxedo's ATMI - there are no transaction, authorisation,
queueing or forwarding functions. XATMI defines the buffer types X_OCTET (equivalent to
Tuxedo CARRAY), X_C_TYPE (equivalent to Tuxedo VIEW) and X_COMMON (similar to X_C_TYPE but
used for both COBOL and C). The Tuxedo FML buffer type is not part of the standard.
TxRPC is a modified RPC: to support transactions, both restrictions and new
features (transactional RPC) have been added. There are two variants: one with a full DCE
implementation, and an IDL-only variant (no DCE runtime). Transarc's Encina uses TxRPC.
Other notes:
Stored messages: a facility whereby request messages are written to stable
storage for later processing. It provides greater recoverability and reliability (each
request is guaranteed to be executed exactly once) and is more "mainframe like".
Four queue types exist: request, reply, failure and error queues. TMS_QM is the
TM server for stored messages, TMQUEUE is the server which manages the queue, and
TMQFORWARD forwards requests to another service.
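As an illustration of the stored message facility, a client might write a request to
stable storage with tpenqueue() (a sketch only: the queue space "QSPACE" and queue "REQ_Q"
are invented names, and the TPQCTL options are left at their defaults):

#include <string.h>
#include <atmi.h>
#include <userlog.h>

int enqueue_request(char *buf, long len)
{
    TPQCTL ctl;                     /* per-request queue control structure */

    memset(&ctl, 0, sizeof(ctl));
    ctl.flags = 0;                  /* default queueing options */

    /* TMQUEUE serves the queue space; TMQFORWARD can later dequeue
       the request and forward it to the target service */
    if (tpenqueue("QSPACE", "REQ_Q", &ctl, buf, len, 0) == -1) {
        userlog("tpenqueue failed, tperrno=%d", tperrno);
        return -1;
    }
    return 0;
}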
CPI-C is the X/Open interface to LU6.2.
Peer-to-peer: allows half duplex communication between APs.
OSI TP: Provides transaction semantics to the OSI protocol and services.
XAP is an API for connection to services in the presentation layer of the OSI
protocol stack. It provides for portability of OSI applications such as X.400, FTAM,
Directory Services, Network Management and VPT.
XAP-TP is an interface to the OSI TP service element and to the presentation
layer. It is an extension of the XAP protocol.
- X/Open CAE Specification: "Distributed Transaction Processing: The XA
Specification". (ISBN 1 872630 24 3, 1991)
- TX Specification (preliminary): ISBN 1 872630 650
- XATMI Snapshot specification: ISBN 1 872630 804
Open/OLTP is Unisys's implementation of the X/Open DTP standard. IML are the actual
developers of the UNIX and PC versions of Open/OLTP; hence Open/OLTP is essentially IML's
Tuxedo.
Apparently global transactions with IMS hosts are not yet possible, as no interface to
LU6.2 (syncpoint level 2) exists.
- Open/OLTP runs on UNIX, Unisys & MVS mainframes and Windows NT. DOS/Windows PCs can
run a greatly reduced OLTP client called Tuxedo/WS (from IMC) or TDWIN (Transactional
Desktop for Windows, from Unisys). Tuxedo/WS connects to a WSH (workstation handler) on a
UNIX machine to access OLTP services (V4.2 or later). Macintoshes can also access OLTP.
- The Master BBS server cannot be a PC client.
- In Tuxedo, the TM is not a process. TM functionality is achieved by the use of a distributed
bulletin board which contains information on which services and servers are
available, and where. The master OLTP server is responsible for the distribution and
maintenance of the bulletin board (via the BBL - Bulletin Board Liaison).
- Three modes of communication are available: synchronous (tpcall()), asynchronous
(tpacall()) and conversational (tpconnect()/tpsend()/tprecv()); see the sketch following
these notes.
- Multi-node configuration is only required where service calls must be made to physically
separate machines.
- System and application messages can be compressed when the environment variable TMCMPLIMIT
is set. Recommended. See confidentiality section below.
- Open/OLTP should be 100% compatible with USL's Tuxedo.
- The DTP X/OPEN standard does not contain some features in Tuxedo, such as the (very
useful) FML fields.
- Tuxedo defines buffer types CARRAY (equivalent to X_OCTET), VIEW (equivalent to
X_C_TYPE). The X_COMMON buffer (similar to VIEW but used for both COBOL and C) is not
available. The FML buffer type available in Tuxedo is not part of the DTP standard.
- Databases supported: Oracle 7, Informix, Sybase 10 (apparently yes), Sybase 11
(planned?). Unknown: MS-SQL Server V6
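The synchronous and asynchronous modes mentioned in the notes above can be sketched as
follows ("BALANCE" is a hypothetical service name; conversational mode, via
tpconnect()/tpsend()/tprecv(), is omitted for brevity):

#include <string.h>
#include <atmi.h>
#include <userlog.h>

int query_balance(void)
{
    char *buf;
    long  len;
    int   cd;

    if ((buf = tpalloc("STRING", NULL, 64)) == NULL)
        return -1;
    strcpy(buf, "account-42");

    /* 1. synchronous: send the request and wait for the reply */
    if (tpcall("BALANCE", buf, 0, &buf, &len, 0) == -1)
        userlog("tpcall failed, tperrno=%d", tperrno);

    /* 2. asynchronous: send now, collect the reply later */
    if ((cd = tpacall("BALANCE", buf, 0, 0)) != -1)
        tpgetrply(&cd, &buf, &len, 0);

    tpfree(buf);
    return 0;
}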
Refer to the original USL Tuxedo documentation. The author used "U6000 Series
TransIT Open/OLTP Transaction Manager - Administration Guide" from Unisys
(7844 9709-000), November 1994 (this is version 4.2.1).
Transaction integrity is guaranteed if the RM implements the two phase commit (2PC)
XA protocol, the client correctly uses the ATMI 2PC function calls (e.g. tpcommit())
and the services controlling the RM correctly use the XA 2PC function calls (e.g.
xa_prepare(), xa_commit()). The client and the service can abort their respective
transactions and be sure that rollback occurs.
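A minimal client-side sketch of such a transaction (the DEBIT and CREDIT services are
hypothetical; each would update an XA-compliant RM, and tpcommit() drives the two-phase
commit):

#include <atmi.h>

int transfer(char *req, long len)
{
    char *rep;
    long  rlen;

    if ((rep = tpalloc("CARRAY", NULL, 1024)) == NULL)
        return -1;

    if (tpbegin(30, 0) == -1) {         /* 30 second transaction timeout */
        tpfree(rep);
        return -1;
    }

    if (tpcall("DEBIT",  req, len, &rep, &rlen, 0) == -1 ||
        tpcall("CREDIT", req, len, &rep, &rlen, 0) == -1) {
        tpabort(0);                     /* roll back on all RMs */
        tpfree(rep);
        return -1;
    }

    if (tpcommit(0) == -1) {            /* two-phase commit across the RMs */
        tpfree(rep);
        return -1;
    }
    tpfree(rep);
    return 0;
}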
The 2PC protocol used by XA requires a transaction log (TLOG) for the second phase of
the 2PC. The TM uses the TLOG for recovery of current distributed transactions. There is
one TLOG per OLTP server. The TLOG can be either a raw device or a file.
See also the section Secure data exchange: peer entity authentication.
- UNIX login security: The OLTP server should be started under a dedicated user, e.g. tuxedo
or tm4-2-2. If possible this account should be locked. Its home directory should be
the directory (e.g. $APPDIR) where all OLTP server and service configuration
files (e.g. UBBCONFIG) are found.
- The umask should be set for this user to 077, to ensure that the ULOG
and TLOG files have permissions rw------- .
- All log files should have permission 0600 and be regularly monitored, archived and
pruned.
- It is suggested that all log files be kept in a separate partition, such as /var.
- The UNIX system logs should record attempts to access the UNIX accounts used for OLTP
operation and administration (e.g. via syslogd).
- The OLTP server process writes server errors to the (ASCII) ULOG file
(one exists on each server machine). Applications can write to this log via the userlog()
function; a sketch follows these notes. A new ULOG is created each day. The ULOG path is
specified in MACHINES/ULOGPFX in the UBBCONFIG file.
- The server can also log the services called to the ULOG (UBBCONFIG
option CLOPT -r, see man txrpt). This could help the audit trail by recording the
sequence of different service calls.
- Services write to stdout and stderr files by default. If these are
used, they should be renamed to indicate service names (this can be specified with the
CLOPT variable in the SERVERS section of UBBCONFIG).
- It is suggested that there be a special audit log for each application: each
application server would write two entries in the log for each call from a client. E.g.
when a client calls a service, the service writes user, date, time, client IP address,
function and function parameters to the log. When the service has done the work requested
by the client, it writes another entry to the log detailing the information returned to
the client. (Is it possible for a service to get the client IP address?)
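A sketch of such an audit entry, written with userlog() so that it also lands in the ULOG
(how the client address is obtained is application-specific, as per the open question
above):

#include <userlog.h>

void audit_entry(char *user, char *client_addr, char *func, char *params)
{
    /* userlog() prefixes each line with a timestamp and process id */
    userlog("AUDIT user=%s client=%s func=%s params=%s",
            user, client_addr, func, params);
}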
- Bulletin Board protection: The PERM variable specifies the permissions
used when creating IPC shared segments. Read/write for owner only is recommended. The
command ipcs lists all shared segments and their permissions. The UBBCONFIG
file should be configured as follows:
*RESOURCES
PERM 0600 [bulletin board & request queues: IPC permissions]
SYSTEM_ACCESS FASTPATH [Unisys say PROTECTED is not usable in production]
*MACHINES
PERM as RESOURCES/PERM
*SERVICES
RQPERM as RESOURCES/PERM
RPPERM as RESOURCES/PERM
SYSTEM_ACCESS as RESOURCES/SYSTEM_ACCESS
- The SYSTEM_ACCESS variable can have two values (V4.2 or later), PROTECTED
(the shared memory for Tuxedo internal tables is not accessible from code outside the
Tuxedo libraries) or FASTPATH (internal shared memory is not protected). Unisys
have indicated that FASTPATH must be used, but this is possibly because they were
still using 4.1. (Unisys: please check!). For maximum security, PROTECTED would
be better.
- Request queue and reply queue permissions are controlled by the RQPERM and RPPERM
variables. These should be set to 0600 (as PERM above). If these
parameters are not specified, the value in PERM is used.
- File system: File and directory permissions must be set restrictively for the $APPDIR
directory (e.g. ~tuxedo), i.e.
chmod 700 $APPDIR
chmod -R go-rwx $APPDIR/*
chown -R tuxedo $APPDIR/*
- The OLTP binaries directory (e.g. $ROOTDIR) normally belongs to root and is
only writeable by root.
chmod 755 $ROOTDIR
chmod -R go-w $ROOTDIR/*
chown -R root $ROOTDIR/*
Reuse of objects for covert data transfer should be prevented by the measures above for
UNIX login, filesystem and shared memory.
The client can use the ATMI function tpchkauth() to check the level of
security required by an application. The client then fills the TPINIT buffer
with the required security information and sends it to the server via tpinit().
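A minimal sketch of this sequence (the field names are those of the TPINIT structure in
atmi.h; password handling is deliberately simplified):

#include <string.h>
#include <atmi.h>

int join_application(char *user, char *app_pw)
{
    TPINIT *tpinfo;

    if (tpchkauth() == TPNOAUTH)        /* no credentials required */
        return tpinit(NULL);

    tpinfo = (TPINIT *)tpalloc("TPINIT", NULL, TPINITNEED(0));
    if (tpinfo == NULL)
        return -1;

    strncpy(tpinfo->usrname, user, MAXTIDENT);
    strncpy(tpinfo->passwd, app_pw, MAXTIDENT);   /* "Level 2" app password */
    tpinfo->flags = 0;

    return tpinit(tpinfo);              /* join the application */
}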
OLTP offers very little security as standard, but it provides an open framework in
which an application can implement strict security functionality. Three authentication
methods are offered: server based, client based and customised.
=> Server based access (OLTP "Level 2" security)
- A password is required for access to a server. Set the following in UBBCONFIG:
*RESOURCES
SECURITY APP_PW
When UBBCONFIG is processed by tmloadcf, a password will be demanded.
Any client connecting to this server must provide this password in the TPINIT
buffer. This password may be changed dynamically via tmadmin -> passwd
or tmconfig.
- OLTP sends the password (i.e. part of the TPINIT buffer entry) over the network bit-mask
encrypted (i.e. very weakly encrypted).
=> Client based authentication (OLTP "Level 3")
- Level 2 must be enabled.
- Initial client authentication: The client must supply an authentication code
before being allowed to access a service. An authentication server must be defined
(AUTHSVC). It is recommended that class systems implement this security function.
The UBBCONFIG file should be configured as follows:
*RESOURCES
AUTHSVC auth_server [name of auth. service, max. 15 chars]
*SERVERS
"auth_server" [authentication server program name]
- Session authorisation: After the initial authentication, an authentication key
(appkey) is exchanged with each buffer transferred. This (4 byte) session key is
checked during each communication (transparently to the application). Most modern session
keys are at least 16 bytes long; it is doubtful whether this 4 byte key is sufficient to
protect confidential transmissions.
- A default authentication service AUTHSVR is provided. This service checks users against
a UNIX style password file. See the AUTHSVR(5) man page. TBD
- For class systems, it is recommended not to
use the default authentication server, but to use a well known system such as Kerberos or
NIS+. However, the session key remains a weak 4 bytes, so it may be preferable to forget
the OLTP authentication mechanisms and implement a customised solution.
=> Customised authentication/authorisation
- A few special tpcall()s after tpinit() can be used to implement
customised authentication (sketched after this list).
- One (or all) of the parameters passed from client to server could be encrypted using a
commonly known session key.
- Another option would be to replace the standard field encoding/decoding routines with
special routines which implement additional encryption or signatures. However, this may
require source code knowledge of the original encoding/decoding routines.
- An application can implement customised encoding/decoding of transmitted data by
defining new communication buffer types (by adding an instance to the tm_typesw[]
array).
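A sketch combining the first two options (the APPAUTH service name and the
encrypt_credential() routine are invented for illustration; the service would verify the
credential and refuse further service on mismatch):

#include <atmi.h>

extern long encrypt_credential(char *user, char *out, long max); /* app-defined */

int custom_auth(char *user)
{
    char *buf;
    long  len;

    if ((buf = tpalloc("CARRAY", NULL, 256)) == NULL)
        return -1;

    len = encrypt_credential(user, buf, 256);   /* session-key encrypted */

    if (tpcall("APPAUTH", buf, len, &buf, &len, 0) == -1) {
        tpfree(buf);                            /* authentication refused */
        return -1;
    }
    tpfree(buf);
    return 0;
}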
=> Secure naming services such as NIS+ or Kerberos (and hence DCE) can be used for
authentication, if a front end is written to OLTP and installed as an AUTHSVC
service. Special authentication services can also be used (Unisys have already implemented
an authentication server ZKM for the Schweizerische Aussenministerium).
The integrity of data transferred between OLTP client and server is guaranteed by TCP
sockets. No additional measures (such as checksums) are implemented.
- An application can implement additional integrity checking by defining new communication
buffer types (by adding an instance to the tm_typesw[] array), as sketched below.
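As a sketch of the buffer-type approach, an application could supply presend/postrecv
hooks which encrypt and checksum each buffer transparently. The tmtype_sw_t field layout
below is an assumption - verify it against the tmtypes.h / typesw(5) of your Tuxedo
release before use:

#include <tmtypes.h>   /* buffer type switch definitions (assumed header) */

/* application-supplied hooks, called around each network transfer */
long crypt_presend(char *ptr, long dlen, long mdlen);   /* encrypt + checksum */
long crypt_postrecv(char *ptr, long dlen, long mdlen);  /* verify + decrypt */

/* an entry added to the application's tm_typesw[] array might look like:
   { "XCRYPT", "", 1024,
     NULL, NULL, NULL,                     (initbuf, reinitbuf, uninitbuf)
     crypt_presend, NULL, crypt_postrecv,  (presend, postsend, postrecv)
     NULL, NULL, NULL }                    (encdec, route, filter)        */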
- Multi-node: During the booting of a multi-node configuration (tmboot), the
configuration file is copied from the master to all servers. This ensures that all servers
have a consistent view of the system configuration. However, this feature slows boot-up.
The password information in the TPINIT buffer is bit-mask encrypted
before being sent over the network. This is not enough for sensitive applications,
therefore the application password and authentication code should be encrypted before
being written into the TPINIT buffer. The encryption mechanism should be such that
playback is not possible and should not be easily broken by brute force. Public key
algorithms for generating a unique session key depending on time / user names / IP address
/ host names are recommended.
System and application messages can be compressed (Tuxedo 4.2.1 or later) with
the environment variable TMCMPLIMIT. This is especially useful for low speed networks and
for reducing application boot time. Compression strength can be set for local and remote
messages separately. By setting remote compression but no local compression, data does
not appear in clear text during network communications. This offers very simple
confidentiality against casual network sniffers. To implement this, set TMCMPLIMIT=0,MAXLONG.
Recommendation: use both local and remote compression.
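A client can set the variable in its own environment before joining the application (a
sketch; servers normally inherit TMCMPLIMIT from their boot environment instead):

#include <stdlib.h>
#include <atmi.h>

int main(void)
{
    /* remote threshold 0: compress everything crossing the network;
       local threshold MAXLONG: never compress local messages */
    putenv("TMCMPLIMIT=0,MAXLONG");

    if (tpinit(NULL) == -1)
        return 1;
    /* ... application work ... */
    tpterm();
    return 0;
}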
Data origin is known, if "OLTP level 3" security is used. It should be noted
that TCP/IP has very definite weaknesses in the area of authentication (IP spoofing).
Digital signatures are not defined in OLTP, however they may be implemented on the
application level.
- Static load balancing is possible if the OLTP administrator correctly defines the SERVERS/LOAD
variable in UBBCONFIG and LDBAL=Y. Each service can be given a weighting
factor and OLTP uses this weighting factor plus queue length to decide where to queue
requests.
- TMNETLOAD allows a bias to be set for local services over remote services.
- Queue definition is also important. Queuing can be:
MSSQ: multiple servers, single queue
MSMQ: multiple servers, multiple queues (each server has its own queue).
- In an MSSQ environment, real time load balancing (i.e. based on bulletin board
statistics) is used if MODEL=SHM, for V4.2.1 or later.
- If MODEL=MP, in an MSMQ environment, round-robin balancing is used.
- A server can examine requests waiting in its queue, and do its own load
balancing/prioritisation.
- OLTP allows data-dependent routing of services, i.e. several servers may offer a
particular service; when a client requests a particular service, the TM sees which
servers offer this service and routes the request according to data content. The rules
for routing are very flexible.
- An application can implement customised routing by defining new communication buffer
types (by adding an instance to the tm_typesw[] array).
- Data-dependent routing is enabled by defining the SERVICES/ROUTING variable in
UBBCONFIG.
- If an OLTP node (in a multi-node configuration) goes down, the master server notices
after a certain time (specified by the SCANUNIT*SANITYSCAN parameters[1] in UBBCONFIG) and
updates the bulletin board. If duplicate services are offered on other servers, all new
requests will be routed to those servers.
- A backup master OLTP can be used, i.e. the UBBCONFIG can have an entry of the
form:
*RESOURCES
MASTER master_machine_name,backup_machine_name
- For single node configuration this is not possible, so redundancy would have to
be provided on the system level.
DOS/Windows PCs do not have a full OLTP implementation. TDWIN (the OLTP client)
requires access to a UNIX machine with a WSH process. There is no way of specifying a
backup WSH server in the OLTP protocol. To provide redundancy, the following are possible:
- The UNIX machine providing WSH service for PCs must have high availability.
- The PCs must be configured to reboot with a backup configuration which points at another
WSH server.
- The client application SW implements logic such that it tests for WSH availability and
automatically switches to another WSH server. This increases client SW complexity,
however.
From V4.2.2. there is a time-out on the server side, so if a PC client hangs up, the
server will close open connections to this PC after a certain time-out.
Oracle 7.0.12 is XA compliant. The XA interface allows an OLTP client to access an OLTP
server (the Oracle RM) which can pass on requests (SQL) to an Oracle server. The Oracle RM
runs under one user (specified in UBBCONFIG), so multiple OLTP servers must be
running under separate users if db access under different usernames is required.
- The OPENINFO string (Acc=P/USER/PASSWORD) defined in UBBCONFIG
can contain a user account and password in clear text! => Protect this file (UBBCONFIG);
if possible, once it has been used to generate TUXCONFIG (its binary equivalent),
encrypt UBBCONFIG and remove any clear text copies. UBBCONFIG does not need to
exist on production systems.
- The OPENINFO string Logdir= specifies where XA error and
tracing information is logged. The default log directory is $ORACLE_HOME/rdbms/log.
The default file name is xa_DBNAME_MMDDYY.trc. The Logdir must exist and
be writeable.
- The OPENINFO string SQLnet= is used to allow logins over SQL*net V1 or V2. => Use
V2.
Unisys's HIT tool is not a TM, but is mentioned here because it uses Open/OLTP. HIT
interfaces to classical mainframe applications by telnet or 3270. It translates terminal
oriented information into transactions. This conversion takes place on an Open/OLTP
server. Clients normally access HIT server services via OLTP, but a direct connection via
the SThandler protocol is also possible (though it is not discussed here).
- HIT can be cleanly installed/deinstalled using the SVR4 package commands.
- An account for the "Application manager" and a group ST are created
during installation.
Transaction integrity depends on how scripts & client software are written. No implicit
transaction integrity is offered by HIT.
See the section Secure data exchange: peer entity authentication.
- It is recommended that all log files be kept on a special partition (i.e $SPOOLDIR
= /var/hit) and be regularly archived and pruned.
- HIT offers the following utilities for logging, debugging and monitoring. The
ALARMserver and LOGserver services are recommended.
I. LOGserver
This server is called if a LOG statement is included in a transaction or function
script. If LOGserver is not active, all messages are lost. Messages are logged to
$LOGPFX.mmddyy if they have priority $LOGLEVEL or higher. Running multiple
servers is advisable, to ensure that messages are not queued too long.
II. STATserver
STATserver receives statistics information from the transaction-handler and
server. If STATserver is not active, all statistics are lost. Messages are logged
to $STATPFX.mmddyy if they have priority $STATLEVEL or higher. Running multiple
servers is advisable, to ensure that messages are not queued too long. Note: statistics
files can grow quickly!
III. ALARMserver
This server is called if an ALARM statement is included in a transaction or
function script. If ALARMserver is not active, all messages are lost. Messages
are logged to $ALARMPFX.mmddyy and displayed on the ALARM device (e.g. console)
if they have priority $ALARMLEVEL or higher. Running multiple servers is advisable,
to ensure that messages are not queued too long.
IV. stmon
The server and transaction processes can be checked while they are active with the stmon
tool.
V. tracing
During the development and debugging stages, tracing can be useful. If TRACE is
set to greater than 0, then tracing information is sent to TRLOGFILE.
VI. Recv & send debugging
In $PROJDIR/envfile.app, DEBUGFILE can be used to specify the file
prefix for recv(), send() and ID statement logging.
Application audit trail logs: see guidelines in OLTP section.
Different UNIX users are required when running HIT:
- "Installation user": The HIT application should be installed as a dedicated
user(s) e.g. hit1-0. If possible this account should be locked. It's home
directory should be the directory where all HIT binaries (e.g. /opt/hit-1.0) are
found.
- "Application manager": This user controls the OLTP system application and must
belong to the ST group. It's home directory is $PROJDIR.
- "Domain manager": HIT servers run under a special user, e.g. hit_domain
whose home directory is $DOMAINHOME, where server configuration files (e.g. UBBCONFIG)
are found. This user must be a member of the ST group and is primarily for
administration.
- "HIT user": Client requests run under this user, e.g. hit_client,
whose home directory is $DOMAINHOME. If possible this account should be locked.
This user must be a member of the ST group.
- It is possible that the above users 2,3 and 4 be one user with home directory
$DOMAINHOME for simple installations.
The umask must be set for these users to 077, to ensure that files created by HIT
have permissions rwx------ . (TBD: perhaps 027 is necessary, giving rwxr-x--- ?)
File and directory permissions must be set restrictively for the application and server
directories.
chmod 750 $PROJDIR
chmod -R o-rwx,g-w $PROJDIR/* /etc/domainname.map /etc/stconfig
chown -R hit.ST $PROJDIR/* /etc/domainname.map /etc/stconfig
chmod 750 $DOMAINHOME
chmod -R o-rwx,g-w $DOMAINHOME/*
chown -R hit_domain.ST $DOMAINHOME/*
If $SPOOLDIR is set to a directory only used by HIT, then logs should be
protected by use of umask (see above) and the directory should also be protected:
chmod 770 $SPOOLDIR
chown hit.ST $SPOOLDIR
Reuse of objects should be protected by the measures above for UNIX login and
filesystem. Shared memory must also be protected (see OLTP chapter).
Since the HIT servers are started from inetd, it should be possible to
restrict client access by IP address if TCP wrappers are used and DHCP is not
used.
HIT has its own authentication server (AUTHserver), which uses two (ASCII)
password files in $DOMAINHOME: one for host accounts (serv_passwd,
managed by stpasswd -s) and one for client access (trans_passwd, managed
by stpasswd -t). The host password file lists the accounts and passwords used for host
access via telnet/3270.
AUTHserver offers three services:
- The AUTH service is called by SThandler on transaction #1 and verifies
username/password against the trans_passwd file. If all is OK, contact is made
with the OLTP TM and subsequent transactions are allowed.
- The transPSW service is called after the AUTH service. It checks password
aging (PASSREQ, MAXDAYS) and passes additional parameters ${PSW[0-9]}
back to the transaction script.
- The servPSW service reads $USER, $PASSWD and $PSW[0-9]
from serv_passwd on the basis of server name and id, and returns these values.
Weaknesses:
- It only authenticates the initial TPINIT connection, not each communication.
- It uses flat files for passwords (not distributed databases), just like the standard
UNIX password file.
- It does not use the standard 4 byte session key.
- Telnet/3270 (hence sockets) is the only guarantor of the integrity of data transmitted
to the hosts.
- OLTP guarantees the integrity of data transmissions to clients (via TCP sockets).
- Data transferred between HIT and the hosts (via telnet) is in clear text -
even usernames and passwords. This is due to the telnet/3270 protocol.
- Data transferred between HIT and clients goes over OLTP. Passwords are weakly
encrypted, but usernames and data are not.
Digital signatures are not defined in HIT or OLTP, however they may be implemented at
the application level.
HIT offers no additional redundancy to that offered by OLTP.
- Install known vendor security patches.
- Documentation: specify where detailed information on Security topics for each database
can be found.
- Use a database certified for military security, or do not install the database on a
networked machine.
Consider using a TCSEC evaluated database. The following table lists the databases
evaluated by the NSA in Spring 1996 [nsa1]. See Appendix C for a more detailed discussion
of TCSEC. C2 is the TCSEC level aimed for by most commercial systems.
Even if a system is evaluated to a certain level (e.g. TCSEC C2), it still requires
careful configuration, monitoring and organisational processes for it to be considered
"secure" in a real production environment. Don't attach too much importance to
the "label" C2 for its own sake. It is often used as a sales pitch without real
substance. E.g. a system may offer "C2 auditing", but that doesn't mean that the
audit logs are useful, or that tools for high level analysis of these logs are included in
the system, or that anyone actually reads the logs!
Database                    Level   Cert. date   Notes
Informix Online/Secure 5.0  B1      15.11.94
Trusted Oracle 7            B1      5.4.94
Secure SQL Server, V11.0    B1      18.5.95      Sybase
SQL Server, V11.0.6         C2      13.10.95     Sybase
Informix Online/Secure 5.0  C2      15.11.94
Oracle 7                    C2      5.4.94
- Database engines protect data integrity with their rollback/rollforward
recovery mechanisms. The integrity of data manipulated by an application is guaranteed
when the application correctly uses the begin transaction, rollback transaction
and commit SQL commands.
- The two phase commit commands can guarantee the integrity of distributed
databases, if used correctly in applications.
- Referential integrity in relational databases is enhanced by the use of triggers
(Sybase, SQL Server, Oracle).
See also the "Policy" chapter for general rules.
- Use different passwords for database administrator (sa) and OS system
administrator.
- Document non standard installations.
- Monitor logs regularly for security breaches or strange behaviour.
- Applications should keep a centralised
record of who did what, from which terminal, on what machine, when, with what object and
whether successful or not.
- The database engine should be started under a dedicated user, e.g. sybase. If
possible this account should be locked. Its home directory should be the directory where
all database configuration files (e.g. /opt/sybase) are found.
- UNIX servers: The umask should be set for this user (e.g. sybase) to 077,
to ensure that files (logs, dumps...) created by the database have permissions
rwx------. (If group access is required, then umask 027 may be necessary.)
- UNIX servers: If filesystems are used, scramble the inode numbers (via fsirand)
to make it more difficult to read the raw device directly (Suns).
- Set ownership/permissions of files/raw devices
restrictively - only the database should be able to read or write these devices/files.
- Database administration scripts should only be
readable by the database administrator.
- Database administration scripts containing
clear text passwords should be encrypted, see [unix1].
- Don't use passwords on the command line of
utilities, they are visible in the process table list. This can be avoided by entering the
password interactively or redirecting stdin from a file.
- Create a (UNIX) group (e.g. sybase) for those users who need access to database
tools directly.
- File and directory permissions must be set restrictively for the database home
directory (where the configuration files & binaries are kept). E.g. for user ~sybase
on an Unix server:
chmod 750 ~sybase
chmod -R g-w,o-rwx ~sybase/*
chown -R sybase.sybase ~sybase
Views and SPs can serve as security mechanisms. A user can be granted permissions on a
view or stored procedure, even if he/she has no permissions on objects that the view or
procedure accesses. Through a view, users can query and modify only data they can see. The
rest of the database is neither visible, nor accessible.
Objects used by a subject must be reinitialised before being used by another subject.
- If SQL clients can access the database directly over sockets/TLI/RPC/named pipes, what
measures are taken to restrict access? Can any machine with the corresponding SQL client
software connect to the server and be presented with the database login prompt?
- If possible, users should be prevented from directly accessing databases via tools such
as ODBC, isql, dwb, Gupta SQL plus etc. One method of doing this is to
use a two-pass password encryption scheme.
A user enters a password to access an application database via an application. The
application encrypts this password to form a second password. This second password is the
actual password used by the database access routines. The database knows only the second
password, while the user knows only the first password - therefore the user cannot access
the database directly (even if he has the tools available) since he has no valid password.
It is important that the encryption algorithm used by the application not become known.
This method can be applied to any database.
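A minimal sketch of the scheme in C, using crypt(3) as the one-way transform (the salt is
illustrative; a production system would choose a stronger keyed algorithm):

#define _XOPEN_SOURCE        /* for crypt() on many UNIX systems */
#include <unistd.h>

#define APP_SALT "K9"        /* application-chosen constant salt (illustrative) */

/* Derive the real database password from the password the user typed.
   The user never sees the derived value, so he cannot log in to the
   database directly, even with SQL tools. */
char *derive_db_password(char *user_password)
{
    return crypt(user_password, APP_SALT);
}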
Guaranteed by the transport protocol used (e.g. TCP sockets, Named pipes...).
Are passwords and usernames passed in clear text over the network between the SQL
client and database?
Guaranteed by the transport protocol used (e.g. TCP sockets, Named pipes...), plus the
challenge response method used on initial connection.
Digital signatures are not normally offered by databases, they can be implemented on
the application level.
Basically a full backup of all databases and transaction logs would be nice each day.
However it is rarely possible due to performance (dumping a 50GB database can take a
while...), costs (disk space, jukeboxes) or time (the night is not long enough for
updating, checking and backing up) reasons.
- A backup and restore policy must exist and be regularly tested. Restores, especially,
need to be tested.
- Specification of a recovery time is the first step in choosing a backup policy.
- It takes longer to restore a database and transaction logs than just a database.
- Backups (database dumps) may have to be made to disk rather than tape, to ensure
minimum downtime. Dumping 2GB of data may take only 30 minutes, but restoring a 6GB
Sybase database from fast disks (Barracudas) may take 8 hours (5 with a Clariion RAID);
a tape would take 4 or 5 times longer.
- If backups are made to disk, store them on a separate disk from both the data and the
transaction logs.
- Use a database which allows on-line backups, i.e. backups can be made when the db is
active.
- The transaction log is normally backed up more frequently than the database,
for speed/space reasons. Backing up the transaction log captures changes made since the
last database dump.
- Databases should be backed up after creating a database or index, or after performing a
nonlogged operation (dump transaction with no log or with truncate_only,
fast bulk copy). Sybase for instance, will not allow transactions to be dumped after
such an operation unless a database dump has been carried out.
- The master database rarely changes (on a production db, perhaps only when users
or disks are changed), so it can be backed up less frequently than live databases.
Quotas, CPU, memory limits etc. per user are available with some databases.
Certain databases offer replication of data between servers. This feature can be used
to improve availability.
- Use separate disks for database data and OS.
- Use separate devices for database data and
transaction logs.
- Some type of disk or database redundancy (RAID
or replication) is required.
- If disk mirroring is used, mirror to separate disks (and separate controllers if
possible).
- Performance as well as availability is affected by RAID configurations. While mirroring
offers more redundancy than RAID 5, it takes more disk space and may be slower.
See also general database recommendations.
4.9.x is very similar to Microsoft's SQL Server V4 (because MS bought 4.9 for OS/2 & NT
from Sybase!). Until this section is complete, refer to the MS-SQL section for
recommendations.
Known security problems:
- No log is kept of successful or failed login attempts.
- No socket protection (Could install socket monitor monitor_socket.pl), anyone
with an SQL client (isql, dwb) can connect to the db and be presented
with the Sybase login.
- Easy to "sniff" client/server login process.
- User passwords are stored in clear text in the master..syslogins
table, visible to the sa. This means that the passwords are also stored in clear
text on disk - so if one has the right to read the Sybase raw devices, the
following command could possibly list passwords:
dd if=/dev/rdsk/c?d?t?s? | strings | egrep "mastersa|masterMYUSERNAME"
- Sybase allows on-line backups, but not to pipes, and compression is not supported.
These issues (and other enhancements) are addressed in the 3rd party backup utilities
"dttools" (from Datatrack??). Recommended for large (> 2GB) databases.
Improvements in Sybase System 10/11:
- Logging is much more detailed; user actions can be logged down to the statement level.
- Passwords are encrypted in the syslogins table and on disk.
- TBD
See also general database recommendations.
- This analysis is based on SQL Server 4.21.
- SQL Server V6 is a feature-rich DBMS based on Sybase 4.9 with many security options.
- Tools: Many GUI & command line utilities are included which are useful for
managing the database & system e.g. SQL Security Manager, SQLadmin, ISQL/w, isql,
console, SQL monitor, SQL service manager, NETSQL, SQL object manager, SQL Tape Utility,
NT Perfmeter, NT event log & alerts.
- Known security problems:
- Easy to "sniff" the client/server login process (when socket connections are used)?
- User passwords are stored in clear text in the syslogins table, visible to the sa.
- The xp_cmdshell extended stored procedure allows a user to execute any file
system command. Imagine nice things like formatting the disk! TBD: how can this
functionality be restricted?
- No password aging mechanism.
The hierarchy of users is sa (system administrator), dbo (database
owner), doo (database object owner) and ordinary users. The sa is a superuser
who works outside the permissions system, so it is very important to protect this account
from unauthorised access.
SQL logon can be configured for standard, integrated or mixed modes.
- Integrated: The NT login validation system is used by SQL server. User accounts
defined in NT which are assigned user level privileges in SQL server can directly access
the database without entering any additional username or password. Only trusted
connections are allowed into SQL server. NT users who have Administrator privilege are
logged into SQL server as sa.
- Standard: SQL Server manages its own login validation (i.e. usernames and
passwords) independently of the operating system. This is the default.
- Mixed: Logins are first treated as in integrated mode, then as in standard mode.
This is useful where not all users connect via named pipes or are not logged onto an NT
domain.
Even if a user has an SQL login, he does not have automatic access to databases. The
database owner must add the user to each database (sp_adduser).
Tools: xp_loginconfig displays the current login setup. xp_logininfo
shows accounts and their login configuration.
- Integrated or mixed modes are preferred, since they offer unified user administration
and authentication and no passwords traverse the network during the SQL logon.
- Default domain should be set in integrated and mixed modes (during
installation).
- Immediately after the SQL server is installed,
the sa password must be changed (it is NULL by default), e.g. via the SQL command:
sp_password null,NEW_PASSWORD,sa
- No default login should be used. If
one is necessary it should have very restricted read only access to data.
- No visitor or guest accounts should be used. By default the pubs and master databases
have guest accounts. They can be disabled by: sp_dropuser guest.
- Login time-out: In SQLadmin -> configure, the time to elapse before login attempts
are cancelled can be set. A setting of 00:01:00 (1 minute) is recommended in general.
- For non integrated user logons (via SQLadmin -> logins, or sp_addlogin):
- Assign new users a default database other than master.
- Assign new users a password (never leave blank) and instruct the user to change this
temporary password a.s.a.p.
When installing SQL server, the following options are recommended:
- Error logging to: Windows NT event log = Yes (i.e. not to a text file)
- Autostart Server at boot time = Yes
- Autostart Monitor at boot time = Yes
- SQL performance monitor integration = Yes, direct response mode.
- Audit level = failed logins + successful logins (Setup -> security options).
The NT event log can be sorted by application, date and priority. It should be
monitored regularly for unusual activity. NT alerts should be used to notify the
administrator of critical conditions.
Views and SPs can serve as security mechanisms. A user can be granted permissions on a
view or stored procedure, even if he/she has no permissions on objects that the view or
procedure accesses. Through a view, users can query and modify only data they can see. The
rest of the database is neither visible, nor accessible.
- Use only NTFS, never FAT.
sp_helpprotect can be used to display an object's permissions. Permissions may
be set on objects and statements.
- Permissions for the
SELECT
UPDATE
INSERT
DELETE
EXECUTE
statements are called object permissions (since they always apply to objects).
Object permissions may be set using the SQL Object Manager -> Object Permissions, or
with grant and revoke.
- Statement permissions are database dependent and apply to the following
statements:
CREATE DATABASE
CREATE DEFAULT
CREATE PROCEDURE
CREATE RULE
CREATE TABLE
CREATE VIEW
DUMP DATABASE
DUMP TRANSACTION
The statement permissions may be set in the SQLadmin -> DB -> Manage -> Users
-> permissions, or with grant and revoke.
no relevant features.
SQL server can communicate with clients via sockets and named pipes. It is preferable
to use named pipes, as SQL server can directly use the NT user account database
(integrated logon), so user accounts on the SQL server do not need to be managed
separately from NT.
Remote server access: A local server may directly access a remote server without having
to log on (sp_addserver, sp_configure 'remote access', 1). The remote server is
effectively controlled by the local server. The mapping of local users to remote usernames
may be achieved by:
- All users keep the same ID in the local & remote servers.
- All remote users are mapped to one local ID.
- Individual users may be mapped to different IDs on the remote server.
The commands sp_addremotelogin, sp_helpremotelogin can be used to
configure/examine remote users.
Trusts: The remote server can trust the local server (no password exchange is
necessary) or treat the connection as untrusted. Trust can be used between servers of
equal security classification which are administered by the same persons.
To ensure that user authorisation takes place, the option trusted should be
set to FALSE in SQLadmin -> remotes -> manage -> remote logins -> set login ID
-> manage -> remote login options (or via sp_remoteoption).
Communication via named pipes guarantees (weak) encryption of username/password,
whereas sockets do not, so named pipes are preferable.
See also General database availability guidelines.
Set the recovery interval to control maximum time to recover databases after a
crash. This has the effect of setting the time between checkpoints.
- When databases are created/changed using the graphical utilities, it is recommended that
a database script be generated which is capable of recreating the database. The SQL object
manager -> scripts window allows generation of scripts for Tables, views, stored procs,
triggers, rules, defaults, user datatypes and logins for selected objects. This offers a
minimum in documentation and allows regeneration of the objects on another server.
- dbcc (database consistency checker) is the principal tool used for checking
table (checkdb) and database structures for inconsistencies.
- dbcc newalloc checks data and index pages against corresponding extent
structures. It replaces checkalloc (which still exists for backward compatibility);
unlike checkalloc, it doesn't stop if it encounters an error, and it produces a more
detailed report. Recommendation: run as frequently as possible, especially before
database backups.
- Locks prevent other users from interfering with data being used for an active
transaction. SQL server has many types of locks: exclusive, shared or demand. The command sp_lock
shows current locks. SQL server detects and resolves both deadlocks and livelocks. See
[sql1], page 346.
- If the SQL server must be shutdown, all users should be warned to prevent unnecessary
loss of user data. e.g. a message could be sent to all users connected via Lan Manager:
net send /users "SQL is going down in 30 minutes, please disconnect"
- The SQL monitor should be configured to restart the SQL server if it crashes abnormally.
An entry will be made in the NT event log.
- Mirrors should be monitored; Sybase does not notify when it switches over from original
to mirror. An SQL script using WAITFOR MIRROREXIT can achieve this, or the
event log can be monitored.
- Monitor transaction log space: sp_spaceused, sp_helpdb or dbcc
sqlperf(logspace).
Only available in V6. TBD.
Mirroring can preserve continuous operation in the event of disk failure. In addition
to SQL Server mirroring, NT server offers filesystem level mirroring and RAID 5.
Mirroring/RAID may also be implemented at the hardware level. Mirroring affects
performance as well as availability.
- A minimum configuration would mirror the transaction log on a separate drive.
- An advanced configuration would mirror all user databases, the master and the
transaction log on several drives on several controllers.
- If the master is mirrored, the mirror device should be given on the server startup
line, e.g. in SQL Setup -> Options -> Set server options -> continue -> parameter, add
-rd:\sql\mirror\mastmir.dat. The mirror device is used when the primary device
fails.
- Enable serial writes: write first to the original then to the mirror disk. (SQLadmin
-> devices -> select device -> manage -> mirroring -> mirror). (Check in
practice?)
A new version is available since summer 1995: SQL Server 6.0. This version
offers enhanced security features over the previous version (V4.21):
- V6.0 Supports replication, both asynchronous log-based transaction propagation as well
as "snapshots" of tables and objects. Replication uses a Publisher/Subscriber
metaphor. Each publication can have a security status of unrestricted (default) or
restricted. Restricted status should be used for class databases which are published.
- Logging and alerts are fully integrated with those of NT. If no alerts are defined
locally, the event can be forwarded to another server for processing. This allows groups
of servers to be monitored centrally.
- OLE is supported, allowing use of Visual basic for administration/security scripts.
- Field level security is supported.
- New encryption services provide secure data exchange between clients and servers.
TBD: specific recommendations for V6
- Oracle is a complex DBMS with lots of built-in security (especially V7.1 or later). Its
backup mechanisms are primitive, however.
- Trusted Oracle 7.1 or later is recommended for class . It is not discussed here.
- Use Oracle 7.1 or later (& SQL*net 2.1) for class :
- it has additional security features over 7.0.12: password encryption for remote
connections and parallel recovery.
- If V7.1 security features are required, the following parameter must be set: COMPATIBLE
= 7.1.0
- Operating System authentication was first supported in V7.0.
- To guarantee data & referential integrity, Oracle provides triggers and constraints.
- Prior to V7.1, SQL*net communication between Oracle servers or between server and client
sent passwords and usernames in clear text over the network, meaning that an intruder
could "sniff" the network and see these passwords.
- Utilities such as orapwd and SQLplus accept passwords on the command line,
which are visible in the process table of many systems (and hence to non authorised
users). Administrators must be aware of this!
See "Oracle7 Server Documentation: Addendum Release 7.1", "Oracle7
Server Concepts Manual" delivered with the Oracle product.
Oracle allows user authentication to be carried out by either:
- the OS (usernames must still exist in the database).
- or by Oracle itself. In this case a password is stored for each user (in encrypted form)
in the database.
Both methods may be used within the same database.
Privileged users, prior to V7.1:
- INTERNAL: Only users connecting as INTERNAL can shut down/start the database. INTERNAL
normally has an additional password to that used by a user. It is not possible to know who
connected as INTERNAL if this privilege is shared by multiple administrators. In addition,
secure remote administration is not always possible.
Privileged users, V7.1 and later:
- SYSOPER: permits STARTUP, SHUTDOWN, ALTER DATABASE OPEN/MOUNT, ALTER DATABASE
BACKUP, ARCHIVE LOG and RECOVER. It also includes the RESTRICTED SESSION privilege. This
privilege is attributed to a user by: GRANT SYSOPER to user_name.
- SYSDBA: contains all ADMIN OPTION privileges, SYSOPER, CREATE DATABASE and
time-based recovery. This privilege is attributed to a user by: GRANT SYSDBA to user_name.
A privileged user can connect via CONNECT user_name/password@my.domain AS
SYSDBA. To check which users have these privileges, use the view V$PWFILE_USERS.
- V7.0: OSOPER and OSDBA, associated with CONNECT INTERNAL, allow the same privileges as
above. (TBD: sure?)
Oracle provides fine-grained access control through the use of schemas, privileges,
roles, views and table security.
A user's access rights are controlled by the settings in the user's security domain.
The security domain consists of:
- Whether authentication information is maintained by Oracle or the OS.
- The user's default and temporary tablespaces.
- List of tablespaces accessible to the user and the associated quotas.
- The user's limits on system resources.
The privileges and roles which provide the user with access to objects.
Each Oracle database has a list of schemas. Each schema is a collection of
schema objects, such as tables, views, clusters, procedures and packages. Each database
also has a list of valid users and to access a database, the user must identify himself
and be authorised (via a personal password). When a database user is created, a
corresponding schema is also created, which governs access to objects in that database. A
user can only connect with a schema of the same name.
A privilege is a right to execute a particular SQL statement (system
privileges) or access a particular object (object privileges). Privileges can be directly
granted to a user or a role (see below). System privileges are attributed via the
SQL commands GRANT/REVOKE or SQL*DBA (Grant system privileges/Roles dialog box). Only
users with the system privilege ADMIN OPTION or GRANT ANY PRIVILEGE can grant/revoke
system privileges to/from users/roles of the database.
Object privileges are also attributed via the SQL commands GRANT/REVOKE. Object privileges
can be granted/revoked by the owner of the schema, or by a user who has been granted the
GRANT OPTION on that schema.
A role is a named group of privileges which can be attributed to users or
other roles. For example, an application can be split up into the following roles: db
Administrator (full privileges), db Operator (backup privileges), Application
Owner (for each db application) and Application User.
Roles offer the advantages of:
- Reduced privilege administration: not user dependent. A role can be attributed to
members of a group easily.
- Dynamic privilege management: if privileges for a group of users must change, only the
role need change.
- Selective availability of privileges: privileges can be selectively made available to a
user as needed.
- Application awareness: applications can query the data dictionary to see what roles
exist.
- Application specific security: roles can be protected by a password, and applications
can enable these roles, since the application (not the user) knows the role password.
Roles can be subdivided into application and user roles.
- An Application role is a role that is granted the privileges necessary to run a
particular application. Often, an application will have several roles each allowing
different access to the application. Application roles may be granted to specific users or
other roles.
- A user role, on the other hand, is used for a group of users requiring the same
privileges in the application.
Within a database each role name must be unique and cannot be the same as a username.
Each role has its own security domain. Each user has the privileges associated with his
security domain, plus the privileges of any roles granted to the user (that are currently
enabled).
Recommendation: attribute privileges to specific roles, not users.
Predefined roles in V7: CONNECT, RESOURCE, DBA, EXP_FULL_DATABASE and
IMP_FULL_DATABASE. These roles may be modified.
Each database contains a user group called PUBLIC (to which all users belong).
Members of PUBLIC may see all data dictionary tables prefixed with USER and ALL.
Privileges (system privileges, object privileges or roles) can be granted to PUBLIC.
Restriction: tablespace quotas cannot be assigned to PUBLIC. The only objects which can
be created as PUBLIC are links and synonyms (CREATE PUBLIC DATABASE LINK/SYNONYM).
Recommendation: only grant privileges to PUBLIC which are really necessary for ALL
users.
Table security is provided for in two ways:
- DML (Data Manipulation Language) Operations: such as DELETE, INSERT, SELECT and UPDATE
should only be granted to users/roles that need to manipulate a table's data. The INSERT
and UPDATE privileges can be granted for a specific column only. With a selective INSERT,
a row can be inserted with values for specific columns only - all other columns receive
NULL or the column's default value.
- DDL (Data Definition Language) Operations: such as ALTER, INDEX and REFERENCES allow
users to alter or create dependencies on a table, so they should be attributed
conservatively. For certain operations (e.g. creating a trigger on a table), a user
requires both an object privilege (ALTER TABLE) and a system privilege (CREATE TRIGGER).
The REFERENCES privilege can be attributed per column. It is normally only needed on
primary or unique keys.
Views: To use a view, a user requires only the privilege for the view (not for
the underlying tables/data). This improves security by providing access to only certain
rows/columns in a table. It may be easier (and less error prone) to manage view access
than privileges to the underlying data/columns.
A view can be created if a user has the privilege CREATE (ANY) VIEW and SELECT, INSERT,
UPDATE/DELETE on the underlying base objects. To grant access to this view to other users,
the GRANT OPTION or ADMIN OPTION is needed.
Recommendation: use views for access control.
Packages: can be used to group together procedures. A role/user can be granted
EXECUTE privilege on a package, effectively allowing use of all procedures and public
variables in that package (assuming also that the user has sufficient privileges to access
the data manipulated by the package). Specific EXECUTE privileges cannot be attributed for
a package's constructs.
Note that the system privilege EXECUTE ANY PROCEDURE allows a user to execute any
procedure in the database.
See general database recommendations.
- SQL*net is the network interface that allows Oracle clients to access Oracle servers.
SQL*net supports communication on all major communications protocols (SNA, TCP/IP,
OSI....).
- Oracle supports distributed databases (and hence 2 phase commit), however sites are
administered (i.e. also security) separately. Each site is a distinct data repository.
Schemas in remote databases are accessed by adding a domain address to objects (e.g.
mytable@mydatabase.myserver.ourdomain).
Remote Links:
Remote databases communicate with each other via links. A link is a path to a remote
database and has two components: a database string and a remote account (username &
password). Two types of links exist: private and public (a public link is created for the
group PUBLIC; any user can use it, and there is no way to restrict access). When a user
accesses a remote database via a link, he/she may do so either using the same
username/password as locally, or by using a "central" account for access to the remote
database, e.g.
CREATE PUBLIC DATABASE LINK remote_db_name
CONNECT TO remote_user_name IDENTIFIED BY remote_user_password
USING 'some_db_string';
If the CONNECT TO clause is omitted, individual accounts are used.
- Central accounts - advantages: easier to administer.
- Central accounts - disadvantages: no accurate audit trail is possible on the remote
database, a username & password is specified in the link, the central account might
have more privileges than any one user requires, and ANY user in the local database can
access the remote database.
- Individual accounts - advantages: can be tailored to the user's requirements, no
username/password is specified in the link, the audit trail is more accurate on the
remote system, and the user has the same name on both systems.
- Individual accounts - disadvantages: requires more administration time (for account
creation/synchronisation).
Recommendation: Use individual links for class databases.
- The use of a secure password file permits remote database administration over
non-secure network connections. TCP/IP and DECnet are not considered by Oracle to be
secure connections.
- It is possible to remotely login to an Oracle server without entering a password
(allowed by default prior to V7.1) using the special OPS$user logon accounts.
This feature should not be used for class
systems. When used with class or , it should only be used between Oracle servers of the
same security classification and managed by the same administrators.
- In V7.1 and later, one can specify that users accessing the SYS, INTERNAL
schema must connect using a unique name & password. Recommended for class .
- If REMOTE_LOGIN_PASSWORD_FILE is NONE (the default), the user
must be authenticated by the OS. If an Oracle password file exists, it is ignored.
- If REMOTE_LOGIN_PASSWORD_FILE is EXCLUSIVE, Oracle uses an externally
stored password file (created using orapwd[2]) for
identification/authentication of privileged users. This provides greater security since it
allows all privileged connections to be audited by username. An EXCLUSIVE password file
can only be used with one database. Recommended.
- If REMOTE_LOGIN_PASSWORD_FILE is SHARED, privileged users must connect
as user SYS with the appropriate password. Not as secure as EXCLUSIVE. Useful for
an administrator who must remotely administer a number of databases.
Depends on the communications protocol used by SQL*net, e.g. TCP/IP, DECnet, SNA
(LU6.2), Appletalk, OSI4, IPX, Named Pipes....
- With V7.1, to encrypt passwords crossing the network, set ORA_ENCRYPT_LOGIN
= TRUE on the client and DBLINK_ENCRYPT_LOGIN = TRUE on the server.
- Use SQL*net 2.1 (the Oracle client) where possible. SQL*net 2.1 encrypts passwords
before transmitting them to a V7.1 server.
- The orapwd utility itself presents a security risk in that it receives a
password on the command line, which is visible in the process table of many systems.
Administrators must be aware of this!
Not supported by SQLnet or the protocols it uses.
Standard SQL access control.
On large multiuser systems, it is important to be able to set restrictions on the
system resources used by a user. However, monitoring of resources normally results in a
slight degradation in performance, and it requires extra sysadmin time.
A profile is a set of resource limits which can be assigned to a user. Each of
these resources can be managed per session (a session is created each time a user connects
to a database) or per SQL call (each time an SQL statement is executed). When the limits
are reached, the current statement is stopped and the user can either roll back, commit or
disconnect. Resource limits:
- CPU time may be limited per call (in 1/100 sec).
- The logical data block reads (from both memory and disk) per call and per session.
- The number of concurrent sessions per user.
- The idle time (in minutes) per session (the current transaction is rolled back, session
aborted and resources returned to the system).
- The elapsed connect time per session (in minutes).
- The amount of SGA space used for private SQL areas can be limited on multithreaded
servers (with a dedicated server, these areas reside in the PGA instead).
Define a minimum number of different profiles and attribute them to users. The more
profiles, the more time it takes to manage them. The best way to estimate limits is to
look at statistics on a live system.
Quotas: Tablespace quotas per user can be used for disk space management. If the
quota for a tablespace is set to zero, the user cannot allocate any new space, but the
space he already occupies remains.
- See also General database recommendations.
- Parallel recovery in V7.1 can reduce downtime when backups must be reloaded.
- Oracle backup is primitive in that true online backups are not possible. Backups
normally occur in two steps: 1) the database is shut down; 2) an OS level backup of the
files used by Oracle is carried out.
- New tools are coming (1Q96) which will allow online database backups. TBD.
- A quasi on-line backup (Oracle calls it a fuzzy backup!) is possible with the
following procedure:
- Oracle is online & being actively used.
- Execute ALTER TABLESPACE with the BEGIN BACKUP option. Checkpoints are
no longer noted in the file headers. During recovery, this allows Oracle to know when the
last checkpoint was carried out and to do a rollforward based on the redo log.
- Do an OS backup of the Oracle files (data and control).
- Execute ALTER TABLESPACE with the END BACKUP option. File headers are
updated to the current checkpoint.
- Automatic recovery is possible, but special scripts may have to be written.
- The Import/Export utilities may also be used to transfer data to/from Oracle. Can they
do on-line backups??
Recommendations:
- Operate the database in ARCHIVELOG mode, it provides:
- greater recovery after disk failures
- transaction integrity
- allows quasi "on-line backups" (see above).
- If possible, shutdown Oracle before doing backups.
See also General database recommendations.
Oracle does not offer mirroring; it must be achieved at the OS, disk or filesystem level.
Replication can increase performance (by reducing remote queries) and availability
(replicated copies are still available if the master dies). The source server contains
the master data and the target server contains a read-only copy of the master data
(called a snapshot). Oracle provides two methods of replicating data from one server to
another:
- Asynchronous: table changes are propagated to a read-only snapshot table at regular
intervals.
- Synchronous: triggers can be used to apply changes to replicated copies immediately.
The snapshot can be refreshed via a complete refresh (i.e. all data in
the snapshot is transferred from the master), or a fast refresh (only
changed rows are transmitted). Fast refreshes are only possible on simple snapshots (i.e.
each row in the snapshot corresponds exactly to a row in a single remote table, no
subqueries, joins etc. are allowed) used with a snapshot log (i.e. a table
in the master database which tracks rows changed in the master table).
[1] See "Application Development &
Administration, Tuxedo Release 4.2 ETP" from USL, page TA2-7.
[2] Note that orapwd expects the password for
INTERNAL or SYS on the command line. The command line is visible to other users on a UNIX
system (via ps) when orapwd is running.