Muneer's Database Administration Blog


Oracle Database Architectural Overview

The architecture of Oracle is designed to ensure that client requests for data retrieval and modification are satisfied efficiently while maintaining database integrity. It also ensures that, should parts of the system become unavailable, mechanisms exist to recover from the failure and bring the database back to a consistent state. Furthermore, the architecture must provide this capability to many clients at the same time, so performance is a consideration in its configuration.

Oracle Instance - The instance is a combination of a memory structure shared by all clients accessing the data, and a number of background processes that perform actions for the instance as a whole.

The shared memory structure is called the SGA, which stands for System Global Area or Shared Global Area, depending on who you ask. Either term is equally acceptable and the acronym SGA is the most common way to refer to this memory structure.

Oracle also includes a number of background processes that are started when the instance is started. These include the database writer (DBW0), system monitor (SMON), process monitor (PMON), log writer (LGWR), and checkpoint process (CKPT). Depending on the configuration of your instance and your requirements, others may also be started. An example of this is the archiver process (ARC0), which will be started if automatic archiving of log files is turned on.

Oracle Database - The database consists of three types of files. Datafiles, of which there can be many depending on the requirements of the database, are used to store the data that users query and modify. The control files are a set of one or more files that keep information about the status of the database and the data and log files that make it up. The redo log files store a chronological record of changes to the data in the datafiles.

User Process - The user process is any application, either on the same computer as the database or on another computer across a network, that can be used to query the database. For example, one of the standard Oracle query tools, SQL*Plus, is a user process. Another example of a user process is Microsoft Excel or an Oracle Financials application. Any application that queries and modifies data in the database is considered a user process. The user process does not have to come from Oracle Corporation; it only needs to make use of the Oracle database.

Server Process - The server process is a process launched when a user process makes a connection to the instance. The server process resides on the same computer as the instance and database and performs all of the work that the user process requests. As you will find out in more detail later in this chapter, the server process receives requests from the user process in the form of SQL commands, checks their syntax, executes the statements and returns the data to the user process. In a typical Oracle configuration, each user process will have a corresponding server process on the Oracle server to perform all the work on its behalf.

Oracle Instance - As shown earlier in Figure 1-2, the Oracle instance is made up of a shared memory structure (the SGA), which is composed of a number of distinct memory areas. The other part of the instance is the set of background processes, both required and optional, that perform work on the database.

The instance is always associated with one, and only one, database. When an instance is started, the DB_NAME parameter in the Oracle parameter (INIT.ORA) file specifies which database the instance will be connected to, while the INSTANCE_NAME parameter (which defaults to the value of the DB_NAME parameter) specifies the name of the instance. The configuration of the instance is always performed through parameters specified in the INIT.ORA file and one environment variable—ORACLE_SID—which determines which instance to start and operate on when working on the same server as the database.
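For example, a minimal INIT.ORA fragment and matching environment setting might look like the following (the instance name PROD and the file contents are purely illustrative):

# INIT.ORA (illustrative fragment)
db_name       = PROD   # database the instance will mount and open
instance_name = PROD   # defaults to the value of DB_NAME if omitted

# Operating system environment (Unix shell shown), selecting the instance:
ORACLE_SID=PROD; export ORACLE_SID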

One of the main objectives of the instance is to ensure that connections by multiple users to access database data are handled as efficiently as possible. One way it accomplishes this is by holding information from the datafiles in one of its shared memory structures—the database buffer cache—allowing multiple users reading the same data to retrieve it from memory instead of disk, since access to memory is about a thousand times faster than access to a disk file.

Another reason that the instance is important is that, when multiple users access Oracle data, allowing more than one of them to make changes to the same data can cause data corruption and make the integrity of the data suspect. The instance facilitates locking and the ability for several users to access data at the same time.

Note: It is important to remember that a user process, when attempting to access data in the database, does not connect to the database but to the instance. When specifying what to connect to from a user process, you always specify the name of the instance and not the name of the database. The instance, in this way, is the gatekeeper to the database. It provides the interface to the database without allowing a user to actually touch the various files that make up the database.

System Global Area (SGA) - The SGA is a shared memory structure that is accessed by all processes in order to perform database activity, such as reading and writing data, logging changes to the log files, and keeping track of frequently executed code and data dictionary objects. The SGA is allocated memory from the operating system on which the instance is started, but the memory allocated to it is managed by various Oracle processes. The SGA is composed of several specific memory structures, as shown earlier in Figure 1-2.

These include:

Shared Pool - The Shared Pool is an area of SGA memory whose size is specified by the INIT.ORA parameter SHARED_POOL_SIZE. The default value for SHARED_POOL_SIZE is 3,000,000 bytes (just under 3MB) in versions of Oracle prior to 8.1.7, and 8,000KB (just under 8MB) in Oracle 8.1.7. The size of the shared pool remains constant while the instance is running and can only be changed by shutting down and restarting the instance after modifying the value in the INIT.ORA file.
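As a hedged illustration, the parameter is a single INIT.ORA line (the size shown is arbitrary); once the instance is running, SHOW PARAMETER shared_pool_size in SQL*Plus displays the value in effect:

shared_pool_size = 16000000   # bytes; takes effect only after an instance restart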

The shared pool is divided into two main areas of memory – the data dictionary cache (also called the dictionary cache or row cache) and the library cache. The data dictionary cache is used to store a cached copy of information on frequently accessed data dictionary objects. The information cached includes the name of the object, permissions granted on the object, dependency information, and so on. The data dictionary cache also includes information on the files that make up the database and what tablespaces they belong to, as well as other important information.

When a server process needs to determine what the name “Students” refers to, it queries the data dictionary cache for that information and, if the information cannot be found, it reads the information from the datafile where the data dictionary is located and then places it in the cache for others to read. The information in the cache is stored using a least-recently-used (or LRU) algorithm. This means that information that is frequently requested remains in the cache, while information that is only occasionally required is brought into the cache and flushed out if space is needed to bring other information in.

You cannot manually size the data dictionary cache—Oracle does this dynamically and automatically. If more memory is required to cache data dictionary information, which may be the case in a database with many objects, the cache is made larger to accommodate the requests. If the memory is needed by the library cache component of the shared pool, some memory may be freed up and removed from the data dictionary cache.

The other major component of the shared pool is the library cache. The library cache is used to store frequently executed SQL statements and PL/SQL program units such as stored procedures and packages. Storing the parsed statement, along with the execution plan for the commands sent to the server, in memory allows other users executing the same statement (that is, identical in every way, including the case of the statement text, spaces, and punctuation) to avoid the work of re-parsing the code. This improves performance and allows the code to appear to run more quickly.

The library cache is broken up into a series of memory structures called shared SQL areas that store three elements of every command sent to the server: the text of the SQL statement or anonymous PL/SQL block itself, the parse tree or compiled version of the statement, and the execution plan for the statement that outlines the steps to be followed to perform the actions for the SQL statement or PL/SQL block.

Each shared SQL area is assigned a unique value based upon a hash calculated by Oracle from the text of the statement, including its letters and their case, spacing, and other factors. Identical statements always hash to the same value, whereas different statements, even ones returning the same result, hash to different values. For example, the following two statements use two shared SQL areas because their case is different, though they both return the same information:

SELECT * FROM DBA_USERS;
select * from dba_users;

One of the goals in ensuring good performance of the applications accessing data in the database is to share SQL areas by making statements that return the same result identical, thereby allowing each subsequent execution of the same SQL statement to use the execution plan created the first time the command was run. The preceding two statements are considered inefficient because they allocate two shared SQL areas to return the same results. This consumes more memory in the shared pool (once for each statement) and causes the server process to build the execution plan for each. Like the data dictionary cache, the library cache works on an LRU algorithm that ensures that statements frequently executed by users remain in the cache while those executed infrequently or just once are aged out when space is required. Also like the data dictionary cache, you cannot specifically size the library cache—Oracle sizes it automatically based upon the requirements of the users and statements sent to the server, as well as the memory allocated to the shared pool with the SHARED_POOL_SIZE parameter.
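One way to observe this effect, assuming you have access to the V$SQLAREA dynamic performance view, is to query the library cache after running the two statements above; each casing should appear as a separate entry:

SELECT sql_text, executions
FROM v$sqlarea
WHERE UPPER(sql_text) LIKE 'SELECT * FROM DBA_USERS%';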

Database Buffer Cache - The database buffer cache is used to store the most recently used blocks from the datafiles in memory. Because Oracle does not allow a server process to return data to the user process directly from disk, the server process always checks whether a block it needs to read is in the database buffer cache and, if so, retrieves it from the cache and returns the required rows to the user. If the block the server process needs is not in the database buffer cache, it reads the block from the datafile and places it in the cache.

The database buffer cache also uses an LRU algorithm to determine which blocks should be kept in memory and which can be flushed out. The type of access can also have an impact on how long a block from the datafile is kept in the cache. In the situation where a block is placed in the cache as a result of an index lookup, the block is placed higher in the list of blocks to be kept in the cache than if it were retrieved as a result of a full table scan, where every block of the table being queried is read. Both the initial placement of datafile blocks in the database buffer cache and how long they are kept there are designed to ensure that frequently accessed blocks remain in the cache.

The database buffer cache is sized by a couple of Oracle initialization parameters. The INIT.ORA parameter DB_BLOCK_SIZE determines the size, in bytes, of each block in the database buffer cache and each block in the datafiles. The value for this parameter is determined when the database is created and cannot be changed. Essentially, each block in the database buffer cache is exactly the same size, in bytes, as each database block in the datafiles. This makes it easy to bring datafile blocks into the database buffer cache—they are the same size. The default for DB_BLOCK_SIZE is 2,048 bytes (2KB) – too small in almost all cases.

The other INIT.ORA parameter that is used to determine the size of the database buffer cache is DB_BLOCK_BUFFERS. This parameter defaults to 50, which is also its minimum value. The total amount of memory that will be used for the database buffer cache is DB_BLOCK_BUFFERS * DB_BLOCK_SIZE. For example, setting DB_BLOCK_BUFFERS to 2,000 when DB_BLOCK_SIZE is 8,192 allocates 2000×8192 bytes, or approximately 16MB of RAM, for the database buffer cache. When sizing the database buffer cache, you need to consider the amount of physical memory available on the server and what the database is being used for. If users of the database are going to make heavy use of full table scans, you may be able to have a smaller database buffer cache than if they frequently access the data with index lookups. The right number is always a balance: high enough that physical reads of the datafiles are minimized, but not so high that memory problems occur at the operating system level.
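A hedged INIT.ORA sketch of the arithmetic just described (the buffer count is illustrative, not a recommendation):

db_block_size    = 8192   # fixed when the database is created
db_block_buffers = 2000   # cache = 2000 x 8192 bytes = 16,384,000 bytes (about 16MB)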

Redo Log Buffer - Before a change to a row or the addition or removal of a row from a table in the database is recorded to the datafiles, it is recorded to the redo log files and, even before that, to the redo log buffer. The redo log buffer is essentially a temporary storage area for information to be written to the redo log files.

When a server process needs to insert a row into a table, change a row, or delete a row, it first records the change in the redo log buffer so that a chronological record of changes made to the database can be kept. The information written to the redo log buffer, and then to the redo log files, and its size, depend upon the type of operation being performed. On an INSERT, the entire row is written to the redo log buffer because none of the data already exists in the database, and all of it will be needed in case of recovery. When an UPDATE takes place, only the changed columns' values are written to the redo log buffer—not the entire row. If a DELETE is taking place, only the ROWID (the unique internal identifier for the row) is written to the redo log buffer, along with the operation being performed. The whole point of the redo log buffer is to hold this information until it can be written to the redo log files. The redo log buffer is sized by the INIT.ORA parameter LOG_BUFFER. The default size depends on the operating system being used but is typically four times the largest operating system block size supported by the host. It is generally recommended that the LOG_BUFFER parameter be set to 64KB in most environments since transactions are generally short.
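Following the recommendation above, a minimal INIT.ORA entry would be:

log_buffer = 65536   # 64KB redo log buffer, suited to short transactions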

The redo log buffer is a circular buffer, which means that once entries have been written to the redo log files, the space occupied by those entries can be reused by other transactions. This is possible because the log writer (LGWR) background process flushes the contents of the redo log buffer to the redo log files whenever a commit occurs or whenever any one transaction occupies more than one-third of the space in the buffer. This essentially means that the redo log files are the most write-intensive files in a database and that the redo log buffer can be kept relatively small and still be efficient.

While the database buffer cache, shared pool, and redo log buffer are a required part of the SGA, the SGA also may have additional shared memory areas, based upon the configuration of the instance. Two of the most common additional shared memory structures are the large pool and the Java pool.

The large pool is sized by the INIT.ORA parameter LARGE_POOL_SIZE, whose minimum value is 0, although Oracle may actually allocate a different amount of memory based upon the values of other INIT.ORA parameters such as PARALLEL_AUTOMATIC_TUNING. The large pool is used for memory structures that are not directly related to the processing of SQL statements. An example is holding blocks in memory while a backup or restore operation through Oracle Recovery Manager (RMAN) is taking place. Another use of the large pool is for sort space in a multi-threaded server (MTS) environment.

The Java pool is used to store Java code and its execution plan. It is used for Java stored procedures and functions and other classes that you have created that will be run in the Java virtual machine (JVM) that resides on the Oracle server. The minimum value for JAVA_POOL_SIZE is 0, but Oracle will always allocate a minimum of 32,768 bytes in Oracle 8.1.6 or higher (in Oracle 8.1.5 the minimum value was 1,000,000 bytes and setting the parameter JAVA_POOL_SIZE to a value less than that would generate an error). The default value, if not set in the INIT.ORA file, is 20,000KB or 20MB.
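As an illustrative INIT.ORA fragment (the sizes are arbitrary, chosen only to echo the figures above):

large_pool_size = 8388608    # 8MB, e.g. for RMAN buffers or MTS sort space
java_pool_size  = 20971520   # 20MB, the documented default made explicit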

Background processes - Aside from the shared memory structures of the SGA, the instance also includes a number of background processes that perform actions dealing with database operations for all users. In detailing these, you need to distinguish between required and optional background processes. Oracle8i has five required background processes. If any of these processes are killed or terminate for any reason, the instance is considered to have crashed and instance recovery (that is, stopping and re-starting the instance), and even database recovery, may be needed. The required background processes, as shown earlier in Figure 1-2, are the following:

SMON - System Monitor (SMON) is a background process that does exactly what you would expect—it monitors the health of the instance as a whole and ensures that the data in the database is consistent. To ensure database and instance integrity, when you start the instance, before the database is opened and made available to users, SMON rolls forward any committed transactions found in the redo log files and rolls back any uncommitted transactions. Because all changes to the data in the database are recorded to the redo log buffer and then the redo log files, anything that took place before the instance crashed is properly recorded to the datafiles. Once this is completed, the database is opened and its data made available to users for querying and modification.

Note: During instance startup SMON only actually performs a roll forward before the database is opened. Rollback of uncommitted transactions takes place after the database is opened and before users access any data blocks requiring recovery. This is known as “delayed transaction rollback” and is there to ensure that the database can be opened as quickly as possible so that users can get at the data. Before any data that was not committed prior to the instance going down is queried or modified, SMON ensures that a rollback takes place to bring it to a consistent state.

SMON also does some clean-up work by coalescing any free space in the datafiles to make it contiguous. When a table, index, or other object that requires storage is dropped or truncated, this frees up the space that was previously used by that object. Because a single object could be made up of many sets of database blocks called extents, and these extents could be of different sizes, SMON coalesces (or combines) these extents into larger chunks of free disk space in the datafiles so that they may be allocated to other objects, if needed. The reason for this is that if the extents were left at their original sizes, a CREATE TABLE statement might fail if it could not find an extent of the size it requested—while free space would still exist in the datafile.

As well as coalescing free space, SMON also de-allocates temporary segments in datafiles that belong to permanent tablespaces to ensure that they do not occupy space required by tables, indexes, or other permanent objects. Temporary segments are created when a sort cannot be performed completely in memory and disk space is needed as a temporary storage area. When this disk space is allocated in a tablespace holding permanent objects such as tables and indexes, SMON needs to get rid of the temporary segment as quickly as possible to ensure that a table or index does not run out of disk space when the space is needed.

PMON - Process Monitor (PMON) is a required background process that does exactly what the name says—it monitors server and background processes to ensure that they are operating properly and have not hung or been terminated. If a server process dies unexpectedly, it is up to PMON to roll back any transaction that the process was in the middle of, release any locks that the process held, and release any other resources (such as latches) that the process may have held. When an Oracle server process dies, these actions are not automatically performed—it is up to PMON to do so.

You should note that PMON might not perform these actions immediately after the process has terminated. In the first place, how does PMON know for sure that a process is hung and not just sitting idle? For example, a user could be connected to the instance through SQL*Plus and then decide to go for lunch while still connected. Should his or her process be terminated? Perhaps, but generally PMON cannot make that decision. It waits for a clue that the process is no longer doing even simple things, such as communicating its presence. The time PMON may wait to determine this can be quite lengthy and cause others to be locked out from portions of the database. PMON will eventually safely roll back the transaction, release locks, and clean up other resources. In other words – you may have to wait a while.

DBW0 - The Database Writer (DBW0) is a background process that performs a very specific task—it writes changed data blocks (also known as dirty buffers) from the database buffer cache to the datafiles. Whenever a change is made to data on a block in the database buffer cache, the buffer where the change was made is flagged to be written to the datafile. The database writer process, of which there can be several, writes the changed blocks to the datafiles whenever a checkpoint occurs, or at other pre-defined intervals. DBW0 does not write to the datafiles with the same frequency that LGWR writes to the redo log files. The main reason for this is that the minimum size of a write to the datafiles is DB_BLOCK_SIZE, which is at least 2,048 bytes, whereas LGWR writes to the redo log files however much information needs to be recorded, which could be as little as 30 bytes if only a single change to a small column is made in a transaction. DBW0 writes are therefore more expensive, in terms of disk I/O, than LGWR writes. Because DBW0 writes are more expensive, they are bunched and take place when one of the following occurs:

• The number of dirty buffers exceeds a pre-defined threshold. The threshold level is different for each operating system and dependent on the values of other INIT.ORA parameters but essentially is designed to ensure that a process can always find a clean (that is, stable and already written to the hard disk) buffer in the cache to write its information to. It is also preferable that the clean buffer be cold—not accessed for a longer period of time and therefore lower on the LRU list of buffers.

• A checkpoint has taken place. A checkpoint can be initiated manually by the DBA issuing the command ALTER SYSTEM CHECKPOINT (see the commands following this list) or automatically by a redo log file group being filled up and LGWR needing to switch to a different redo log file group. In either case, a checkpoint forces DBW0 to write all dirty buffers to disk to ensure that at the time a checkpoint takes place all the datafiles are consistent with each other. A checkpoint also causes the CKPT process to update all datafile headers and the control files to ensure that the information is consistent across the database.

• A server process cannot find a free buffer. Each time a server process needs to bring data into the database buffer cache, it scans the LRU list of buffers to determine if a clean free block exists. If it cannot find one after scanning a pre-defined number of buffers, this will trigger DBW0 to write all dirty buffers to the datafiles and thereby create clean buffers that the process can use for its information.

• A timeout occurs. Oracle ensures that a write takes place to the datafiles every three seconds, if needed, so that the datafiles do not become too out-of-sync with the information in the redo log files and the database buffer cache. This also helps ensure that changes to the data are written to the database ahead of checkpoints so that less information needs to be written to disk at that point in time.
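The checkpoint-related commands referred to in the list above can be issued from SQL*Plus by a DBA; the second statement forces a log switch, which in turn triggers a checkpoint:

ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM SWITCH LOGFILE;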

In a typical Oracle8i instance, the number of database writers is set to one and others are not started. If you have many hard disks and datafiles in your database, you may want to increase the number of database writer processes. The Oracle initialization parameter DB_WRITER_PROCESSES is used to configure the number of database writers. Its default value is 1, which starts a process called DBW0. Increasing the value of DB_WRITER_PROCESSES starts additional database writers, up to a maximum of 10 (DBW0 through DBW9). You should only use more database writers if your system is write-intensive and you have datafiles on several disks.
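For example, on a write-intensive system with datafiles spread over several disks, a DBA might set (the value is illustrative):

db_writer_processes = 4   # starts DBW0 through DBW3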

LGWR - The Log Writer process, as mentioned previously, writes data from the redo log buffer to the redo log files at defined thresholds. It performs this whenever an implicit or explicit commit occurs; when more than 1MB of data exists in the redo log buffer; when the log buffer is more than one-third full with data from a transaction (that is, when a single transaction occupies a large portion of the redo log buffer); when DBW0 needs to write to the datafiles, forcing LGWR to write out changes from the redo log buffer to the redo log files beforehand; or when three seconds have elapsed since the last LGWR write. For any instance there is only one LGWR process and this number cannot be changed. Of the times that LGWR writes to the redo log files, the most common reason for doing so is that a commit has taken place. The most unlikely reason for an LGWR write is that the redo log buffer contains more than 1MB of changes. Very few databases have a need for a large redo log buffer and having one that is extremely large can leave you open to losing a large number of changes and being unable to fully recover the database in case of instance failure.

One important element to keep in mind is what actually happens when a transaction commits. When the user issues the COMMIT statement to commit the transaction, LGWR is instructed to flush data in the redo log buffer to the redo log files. If for any reason LGWR is unable to write to the redo log files, the transaction cannot commit and will be rolled back. Oracle requires that a physical record (that is, a write to a disk file) exist in order for the transaction to be considered committed. If the physical record cannot be created (that is, a write to the redo log files cannot take place because the redo log file is unavailable due to a disk crash or other occurrence), the commit cannot complete and the user is notified that the transaction was aborted.

CKPT - The Checkpoint process is responsible for one thing—updating control files and datafiles whenever a checkpoint takes place. Checkpoints can take place when a DBA issues the command ALTER SYSTEM CHECKPOINT, when a redo log file group fills up and LGWR initiates the checkpoint, or when the values specified by the INIT.ORA parameters LOG_CHECKPOINT_INTERVAL, LOG_CHECKPOINT_TIMEOUT, or FAST_START_IO_TARGET are exceeded. Prior to Oracle8 the CKPT process was optional. Whenever a checkpoint occurred, LGWR would update datafile headers and control files. However, this meant that LGWR could not write to the redo log files at the same time it was performing checkpoint updates, which slowed the system down. For this reason, the CKPT process was made a required background process in Oracle8.

The required background processes, along with the SGA, provide the basic functionality required for an Oracle instance to operate. However, depending on the configuration of your database and what Oracle options you are using, additional background processes can also be started. Some of the more common optional background processes include:

ARC0 - The Archiver Process copies redo log files that have been filled and switched from to one or more archive log destinations. In Oracle8i, unlike previous versions, you can configure multiple archiver processes using the INIT.ORA parameter LOG_ARCHIVE_MAX_PROCESSES, which defaults to one.

If you specify a large number of archive log destinations and have configured archiving for the instance, having more than one process can improve performance.
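A hedged sketch, assuming a second archiver is wanted:

log_archive_max_processes = 2   # starts ARC0 and ARC1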

Snnn - The Shared Server Process is used in a multi-threaded server (MTS) environment to process requests from database users. Unlike in a typical dedicated server configuration where each user process is assigned a dedicated server process to perform work on its behalf, a multi-threaded server configuration shares the server processes among all the user processes, thereby making the use of resources on the computer more efficient. The number of shared server processes is configured by the INIT.ORA parameters MTS_SERVERS (which defaults to 0) and MTS_MAX_SERVERS (which defaults to 20). If MTS_SERVERS is set to 0, the multi-threaded server is not configured on the instance and no shared server processes are launched.

Dnnn - The Dispatcher Process is also used in an MTS configuration. When a request to initiate an MTS connection to the instance is received from a user process, a dispatcher is assigned to that process. The same dispatcher can service requests from many users, passing those requests to a queue where they are picked up by a shared server process and executed. The results are then placed in a queue for the dispatcher that requested them; the dispatcher picks up the results and transmits them to the client. The configuration of dispatchers is performed by setting the INIT.ORA parameters MTS_DISPATCHERS (default of 0) and MTS_MAX_DISPATCHERS (default of 5).

LCK0 - The Lock Process provides inter-instance locking between nodes in an Oracle Parallel Server environment. When using Oracle Parallel Server, more than one instance can access the same database. When users connected to one instance need to allocate locks to change data in the database, the LCK0 process ensures that a user connected to another instance does not already have the lock before allowing the requesting user to get it. This is done to ensure that data in the database remains consistent at all times, even when accessed by more than one instance.

RECO - The Recoverer Process is used in distributed database environments, and only when the DISTRIBUTED_TRANSACTIONS Oracle initialization parameter is set to a value higher than its default of zero. In this situation it is started automatically and is responsible for resolving any failed distributed transactions involving data residing on other nodes in the distributed database configuration. Essentially, it makes sure that all databases in a distributed transaction are in a consistent state and that a distributed transaction does not commit on one node while failing to commit on another.

Oracle database - An Oracle instance by itself only provides the mechanism to access the database. If the database is not created, having the instance started allows you to create it. However, to get any real use out of the instance, the database must exist. The database is where the data is stored—both the metadata (data about data, also known as the data dictionary) and user data, such as the Orders table or Customers table or LastName index.

An Oracle database is composed of a series of operating system disk files, as shown in Figure 1-3. The three key types of files that Oracle uses are datafiles, control files, and redo log files. Each file type is required for a database to operate properly and the loss of one or more of the files belonging to the database usually requires that recovery be initiated.

Datafiles - Oracle datafiles contain the actual information that the database stores. When you create a table or index, it is stored in a datafile (a physical element of storage) that belongs to a tablespace (a logical element of storage). Datafiles store anything that is considered a segment in Oracle, such as tables, indexes, clusters, partitions, large objects (LOBs), index organized tables (IOTs), rollback segments, and temporary segments. Anything that requires storage in the database—whether created by the user or by Oracle itself—is stored in datafiles. In fact, even the configuration of the database itself and the datafiles, redo log files, tables, indexes, stored procedures, and other objects that exist in the database are stored in a datafile.

Datafiles in Oracle8i have certain characteristics, such as:

✦ A datafile can only be associated with one database. It is not possible to create a datafile that will be part of two databases at the same time. You can, however, when using Oracle Parallel Server, have two instances access the same database and its datafiles.

✦ A datafile belongs to a logical storage element called a tablespace. A single datafile can only be associated with one tablespace and will only store data that is configured to reside on that tablespace.

✦ Datafiles can be configured to have a fixed size, or they can have an attribute set to allow them to grow should no free space within the datafile be found. If you configure a datafile to autogrow, you can also configure a maximum size for it to grow to, or set no limit (not recommended).

✦ Datafiles are organized internally into database blocks of the same size as the value of the DB_BLOCK_SIZE parameter. Each unit of storage inside the datafile is of DB_BLOCK_SIZE.

✦ Datafiles can be read by any server process in order to place blocks of data in the database buffer cache. Datafiles are normally written to only by the DBW0 process to minimize the possibility of corruption.

Redo log files - Redo log files contain a chronological record of all changes that have been made to the database. They are written to by the LGWR process and operate in a circular fashion, which means that when one redo log file fills up, it is closed and another redo log file is opened for writes. When the second one fills up, it is closed and the first, or another redo log file, is opened for writes, and so on. Each Oracle database must have at least two redo log file groups, with at least one redo log file per group, or the database will cease to allow changes to the data.

Control files - When an Oracle instance is started, one of the first files opened and read is the control file. The control file contains information about which files make up the database and when information was last written to them. If the information in the control file does not match that in one of the datafiles or redo log files, the instance cannot be opened because the database is considered suspect. This means that some sort of recovery may be required and you will need to deal with it. The location and number of control files is set by the INIT.ORA parameter CONTROL_FILES. A database must have at least one control file, although two or more are recommended. If a control file cannot be opened for any reason, the instance cannot start and the database will not be accessible.
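An illustrative CONTROL_FILES entry with two multiplexed copies (the paths are hypothetical):

control_files = (/u01/oradata/PROD/control01.ctl, /u02/oradata/PROD/control02.ctl)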

Other key Oracle files - While datafiles, redo log files, and control files make up the Oracle database, other files are also required to make the instance work properly, to determine who is allowed to start and stop the instance, and to ensure good recoverability. The files available for this purpose, as shown in Figure 1-3, are the parameter (or INIT.ORA) file, the password file, and the archived redo log files.

Parameter file - The Oracle parameter file, also known as the INIT.ORA file, contains the parameters and values that are used to start and configure an Oracle instance. The parameter file may also contain settings for parameters that determine how Oracle behaves in processing a query. An example of the latter is the OPTIMIZER_MODE parameter, which can determine whether Oracle should use statistics in calculating the execution plan for a specific query. The parameter file can be stored in any location on the hard drive of the computer where the instance will be started and can have any name. The default location for the parameter file is operating system dependent, although the name always defaults to INITSID.ORA, where SID represents the name of the instance it will be used for.

Password file - In order for a user to be able to start and stop the instance, special privileges called SYSDBA or SYSOPER are required. The password file, which is created by using the Oracle Database Configuration Assistant or the ORAPWD utility, lists the users that have been granted one or both of the privileges mentioned. When a user issues the STARTUP or SHUTDOWN command, the password file is checked to ensure that the user has the appropriate privileges.

In order to add users to the password file, the INIT.ORA parameter REMOTE_LOGIN_PASSWORDFILE must be set to EXCLUSIVE. The default for this parameter is SHARED, which does not allow you to add users to the file.
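As a sketch, a password file might be created with the ORAPWD utility from the operating system prompt and then enabled through INIT.ORA (the file name, password, and entry count are illustrative):

orapwd file=orapwPROD password=secret entries=5

remote_login_passwordfile = exclusive   # INIT.ORA; permits adding users to the file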

Archived redo log files - Archived redo log files are copies of full redo log files that have been closed by LGWR; they are created by the ARC0 process when automatic archiving is turned on and the database is in ARCHIVELOG mode. An Oracle database by default runs in NOARCHIVELOG mode, which means that as redo log files become full, they are not copied or backed up in any way. If the redo log files need to be written to again because other log files have also become full and a log switch has taken place, the information in the previously full log is overwritten with no record of the changes within it ever being kept. From a recovery standpoint, running a database in NOARCHIVELOG mode essentially means that if anything goes wrong, you must restore your last full database backup and re-enter any data that changed since then.

When the database is in ARCHIVELOG mode, Oracle does not allow a redo log file to be overwritten until it has been archived, that is, copied to a different location by the ARC0 process. Until the redo log file is archived, it cannot be written to. If the ARC0 process is not started (indicating that automatic archiving has not been configured using the LOG_ARCHIVE_START and LOG_ARCHIVE_DEST_n Oracle initialization parameters), users will not be able to make changes to the database until archiving takes place.
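A minimal, illustrative configuration for automatic archiving might combine two INIT.ORA parameters with placing the database in ARCHIVELOG mode from SQL*Plus (the destination path is hypothetical):

log_archive_start  = true                       # start ARC0 automatically
log_archive_dest_1 = "location=/u03/arch/PROD"  # where archived logs are written

STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;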

Having an archived copy of the redo log file enables you to perform data recovery up to the point of failure, when combined with the current redo log file. ARCHIVELOG mode also allows you to recover the database to a specific point in time or a particular transaction, which provides protection from disastrous user error, such as someone issuing the command DELETE FROM CUSTOMERS and then committing the transaction. An archived redo log file is simply an image copy of a redo log file that has been completely filled by LGWR, after a log switch to another log file group has taken place. Having a copy of that redo log file ensures that your critical data can be recovered with more flexibility.

Processing SQL Statements

Connecting to an instance: In order to submit SQL statements for processing to Oracle, you must first establish a connection to the instance. The process of doing so, as shown in Figure 1-4, may include a number of components, such as the user process, Net8, the physical network, the server computer, server process, the instance and, finally, the database.

Note: While a network connection between the client (user process) and the server is not required, this is the more typical configuration. It is also possible to have both the user process and server process on the same computer, as would be the case if you were running SQL*Plus on the same machine as the database. With the exception of the network connection, all other elements of the way requests are sent to the server and the results returned to the user process remain the same.

The process of connecting to an instance is initiated when a user starts an application on the client computer that makes use of data in an Oracle database. By specifying a connect string (username, password, and the instance to connect to), the user instructs the client application to start the process of establishing a connection to the Oracle instance (not the database).

After the user has provided a username and password, as well as the name of the instance to connect to, the Net8 client component on the client computer attempts to resolve the name of the instance in any of a number of ways that have been configured for the client. One of these methods is to use a file on the local client computer called TNSNAMES.ORA to look up the instance name and determine which machine the instance resides on and the network protocol that needs to be used to communicate with it. Another method is to contact an Oracle Names server to determine the same information. Whichever way is configured for Net8 to use, the process of determining the location of the instance is transparent to the user process, unless an error is encountered.
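For instance, a TNSNAMES.ORA entry resolving the net service name PROD might look like the following (the host name and port are hypothetical):

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
    (CONNECT_DATA = (SID = PROD))
  )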

Once Net8 has determined on which computer the instance resides, it sends a connection request with the username and password along the network, using the configured protocol, to that computer. On the server computer, the listener process of Net8 receives the request, launches a dedicated server process on behalf of the user (or connects the client to a dispatcher, in the case of an MTS connection request), and verifies the username and password. If the username and password are not correct, the user receives an error and needs to try again; if they are correct, the user is connected to the instance. At this point, the communication between the user process and the server process is direct and the listener is no longer involved—until another connection attempt is made.

In processing client requests, the server process receives the request from the user process and, in processing the SQL statement, makes use of an area of memory allocated explicitly to it—the Process Global Area (or Program Global Area), or PGA for short. The PGA is private to each server process launched on the server and is used to allocate memory for sorts, store session information (such as the username and associated privileges for the user), keep track of the state of various cursors used by the session, and provide stack space for the values of variables and the execution of PL/SQL code for the session. In a multi-threaded server (MTS) environment, some of this information is kept in the large pool because server processes are shared in an MTS configuration, but a typical dedicated server environment allocates a PGA when a server process starts and de-allocates it when the process terminates.

Statement and transaction processing: The user and server processes, the PGA, and the SGA, along with the other processes that make up the Oracle instance, all work together when a SQL statement is sent to the Oracle server to query or update data. When a user issues any SELECT, INSERT, UPDATE, or DELETE statement, Oracle must go through several steps to process these queries. Consider the processing of the following statement:

UPDATE Courses
SET RetailPrice = 1900
WHERE CourseID = 101;

When this statement is executed, Oracle goes through the following steps. Oracle first parses the statement to make sure that it is syntactically correct. The parse phase is typically done once for any SQL statement and will not need to be performed again if the same statement is executed by any user, even the same one that sent it across in the first place. Oracle always attempts to minimize the amount of parsing that needs to be performed because it is quite CPU-intensive, and having to parse many statements increases the amount of work that needs to be performed by the server process.

During this parse phase, Oracle (that is, the server process) first determines whether the statement is syntactically correct. If not, an error is returned to the user and no further work is performed; if so, Oracle next determines whether the objects that are referenced (in this case, the Courses table) are available to the user and whether the user has permission to access the object and perform the required task (that is, the UPDATE). It does this by locating information about the object in the data dictionary cache or, if this information is not in the cache, by reading it from the datafiles where the data dictionary resides and placing it in the cache. Placing information about the object in the cache ensures that future requests for the object are performed more quickly, in case other users are also referencing the Courses table. If the user does not have permissions or the object does not exist, an error is returned to the user.

When the object is located and the user has permissions, the next element of the parse phase is to apply parse locks on the objects being referenced by the statement (the Courses table) to ensure that no one makes a structural change to the object while it is being used, or drops the object. The server process next checks whether the statement has been previously executed by anyone by calculating a unique hash value for the statement and checking the shared pool to see if the shared SQL areas contain the hash value calculated. If so, then Oracle does not need to build the execution plan (the series of tasks to be performed to satisfy the query). It can simply keep the execution plan that was previously created and use it in the next phase of processing. If it cannot find the execution plan, indicating this is the first time the statement is being run, or the statement is no longer in the shared pool and has been aged out, Oracle then builds the execution plan and places it in the shared SQL area in the shared pool. Oracle then proceeds to the execute phase of processing.

During the execute phase, Oracle runs the execution plan in the shared pool and performs whatever tasks are contained therein. This includes locating the relevant blocks of data in the database buffer cache or, if they are not in the cache, reading the datafiles where the data resides and loading the data blocks into the database buffer cache within the SGA. The server process then places a lock on the data being modified (in this case, the row containing course 101). This lock prevents other users from updating the row at the same time. Oracle then updates a rollback segment block and a data segment block in the database buffer cache and records these changes in the redo log buffer. It places the row's pre-update values in the rollback block and the new values in the data block.

The Rollback Segment is used for two purposes:
✦ Read consistency: Until the change is committed, any user who executes a query for the retail price of course 101 sees the price prior to the update. The new value is not visible until the update is committed.
✦ If the system crashes before the transaction is committed, or if the user issues an explicit ROLLBACK command, the data in the rollback segment can be used to return the row to its initial state.
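A brief sketch of read consistency in action, using the course from the example above:

-- Session A updates the row but does not yet commit:
UPDATE Courses SET RetailPrice = 1900 WHERE CourseID = 101;

-- Session B, querying at the same time, still sees the old price,
-- reconstructed from the rollback segment:
SELECT RetailPrice FROM Courses WHERE CourseID = 101;

-- Only after Session A commits does the new price become visible:
COMMIT;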

The final phase of processing is the fetch phase. For a SELECT statement, the fetch phase of processing returns the actual data to the user, and it is displayed in SQL*Plus, or the user process that made the request. For an UPDATE operation, or any data manipulation language (DML) statement, the fetch phase simply notifies the user that the requisite number of rows has been updated.

When other statements are part of the same transaction, the same series of steps (that is, parse, execute, and fetch) take place for each statement until the user issues a COMMIT or ROLLBACK statement. When the transaction is committed or rolled back, Oracle ensures that all information in the redo log buffer pertaining to the transaction is written to the redo log files, in the case of a COMMIT, or the data blocks are restored to their previous state, in the case of a ROLLBACK, and removes all locks. Oracle also erases the values held in the rollback segment. This means that once a transaction is committed, it is no longer possible to roll it back, except by performing a database restore and recovery.

***********************************************************

References:

Oracle8i DBA Architecture And Administration Certification Bible – HungryMinds
