Virtual Processors and Threads 03-2003 7-1
© 2001, 2003 International Business Machines Corporation
Virtual Processors and Threads
Module 7
Objectives
At the end of this module, you will be able to:
• Define a thread
• Describe the multithreaded architecture
• Describe how the virtual processors are implemented in UNIX
• Use onstat to monitor VPs and threads
• List and explain the virtual processor classes
• Describe how network connections are handled by the server
• Set server configuration parameters related to VPs and threads
• Dynamically add and remove virtual processors
Before looking at server architecture, it is helpful to first examine the general concept of a
thread.
A thread can be thought of as a sequence of instructions being executed in a program. When
multiple threads run within the same entity (in our case, the entity is a process), it is called
multithreading.

A thread is sometimes referred to as a lightweight process because it makes fewer demands
on the operating system.
In the following pages, multithreading is explained by comparing the relationship between a
thread and a process to the relationship between a process and an operating system.
What is a Thread?
A thread is a sequence of instructions executed in a program.

[Figure: a CPU running a thread inside an oninit process]
A regular UNIX process that does not implement threads can be thought of as a single-threaded
process, although we do not normally call it that. One sequence of instructions is being executed
for this process, and the operating system is responsible for scheduling and running the process.
Multithreading is a method of executing many iterations of a process for different users without
having to create many instances of that process at the operating-system level.
A multithreaded process can have multiple threads running within a UNIX process, each
running sequentially and giving up control to other threads at a specific point in time. How this
is accomplished makes multithreading different from simply receiving and executing requests
from a single user’s process.
Multithreading is a systems-level concept: the program executes machine-level instructions to
manipulate the process so that it executes for many users instead of just one. The program
executes these instructions entirely at the user level, not at the UNIX kernel level.
As far as UNIX is concerned, this multithreaded process is a single process just like any other
process.
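The round-robin handoff described above can be shown in miniature. The following is an illustrative Python sketch, not Informix code: each "thread" is a generator that yields control at specific points, and a single process multiplexes all of them from one ready queue.

```python
from collections import deque

def worker(name, steps):
    # A "thread": a sequence of instructions that yields control
    # back to the scheduler at specific points.
    for i in range(steps):
        yield f"{name}:{i}"

def run(threads):
    # One process multiplexing many threads: a ready queue of
    # suspended instruction sequences, resumed one at a time.
    ready, trace = deque(threads), []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))  # run the thread to its next yield point
            ready.append(t)        # it yielded: back on the ready queue
        except StopIteration:
            pass                   # thread finished; drop its context
    return trace

print(run([worker("a", 2), worker("b", 2)]))
# interleaved: ['a:0', 'b:0', 'a:1', 'b:1']
```

As far as the operating system is concerned, the whole run is one ordinary process, which is exactly the point made above.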
Single Threaded vs. Multithreaded

[Figure: three processes each running a single thread, compared with one process running multiple threads]
Consider first how UNIX manages multiple processes.
Every UNIX process has an address space that consists of three segments: text, data, and stack.
The text segment contains the machine instructions that form the program’s executable code.
The stack contains local variables being used by the program’s functions. Finally, the data
segment contains the program’s global and static variables, strings, arrays, and other data.
On a single processor machine that runs 1000 processes, only one process is being executed by
UNIX at any given time. Each process is running for a specific amount of time before it is
pre-empted (interrupted) by the kernel so that the next scheduled process can be run. When
pre-empting a running process, enough information about the process must be saved so that it
can be restarted at a later time. This information is called the context of the process. The context
basically consists of the following components:
• The program counter, which specifies the address of the next instruction to execute.
• The stack pointer, which contains the current address of the next entry in the stack.
• The general-purpose registers, which contain the data that the process generates during
its execution.
A Single-Threaded Process
[Figure: a single-threaded process. Its context consists of the program counter, stack pointer, and register contents; its process space in memory consists of the text, stack, and data segments.]
In UNIX, a context switch occurs when a running process is interrupted by the operating system.
To do this, the operating system saves the context of the currently running process in
preallocated data structures in memory and loads the context of the next scheduled process.
Loading the context involves restoring the program counter, the stack pointer, and all general-
purpose registers to the values saved by the operating system the last time the waiting process
was pre-empted. Once this is complete, the process resumes execution at the instruction
specified by the program counter.
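The save-and-restore step can be made concrete with a short sketch. This is a toy model in Python, not operating-system code: a Context holds the three components named above, and a context switch copies the CPU state out to the outgoing process's save area and loads the incoming process's saved state in its place.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    # The three components of a process context described above.
    program_counter: int = 0
    stack_pointer: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, outgoing, incoming):
    """Save the CPU state into the outgoing process's save area,
    then load the incoming process's saved context onto the CPU."""
    outgoing.program_counter = cpu.program_counter
    outgoing.stack_pointer = cpu.stack_pointer
    outgoing.registers = dict(cpu.registers)
    cpu.program_counter = incoming.program_counter
    cpu.stack_pointer = incoming.stack_pointer
    cpu.registers = dict(incoming.registers)

cpu = Context(0x1000, 0x7FFF0000, {"r1": 42})        # state of the running process
proc_a = Context()                                    # save area for it
proc_b = Context(0x2000, 0x7FFE0000, {"r1": 7})       # previously saved context
context_switch(cpu, proc_a, proc_b)                   # proc_b now runs; proc_a waits
```

The addresses and register names here are invented for illustration; only the save/load pattern mirrors the description above.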
A Context Switch
[Figure: a context switch. The context (program counter, stack pointer, register contents) of the running process is saved, and the saved context of a waiting process is loaded onto the CPU. The process space consists of the text, stack, and data segments.]
In a multithreaded process, each thread has its own context; that is, its own place in the code
(program counter) and its own data variables. A multithreaded process works much like an
operating system in the way it switches context from one thread to another.

The process itself executes machine instructions to copy the context of the currently running
thread out and copy the next scheduled thread in. This achieves basically the same result as an
operating-system context switch: the program counter points to a new instruction within the
text segment, the stack pointer points to a different area of memory, and the general-purpose
registers are restored to the values previously saved for this context. In the case of IBM
Informix server multithreading, the stack for a thread is held in shared memory so that the
thread can migrate between server processes (virtual processors). The default stack size is 32
kilobytes per user thread. The server checks for stack overflow and automatically expands the
stack.
Because a multithreaded process acts like a mini-operating system, it is responsible for handling
things such as:
• Scheduling: The currently running thread decides when to yield control of a process
and transfer control to another thread. The currently running thread also decides which
thread to run next based on an internal prioritization mechanism.
A Multithreaded Process
[Figure: a multithreaded process in shared memory. The process space contains one text segment and one data segment, plus a separate stack for each thread. Each thread has its own context (program counter, stack pointer, register contents) and its own pointer into the text of the process.]
The processes that make up the database server are known as virtual processors. Each virtual
processor (VP) belongs to a virtual processor class. A VP class is a set of processes responsible
for a specific set of tasks (in the form of threads), such as writing to the logical log or reading
data from disk. This means a VP of a certain class can run only threads of that class, and a VP
can belong to only one class. A VP class can have one or more VPs, and in most cases the
number is configurable by the system administrator. All VPs of all classes are instances of the
same executable, oninit.
Virtual Processors
Every process in the server environment is known as a virtual processor (VP) because it
schedules and runs its own threads. Every VP belongs to a VP class, which is responsible for a
specific set of tasks.

[Figure: a virtual processor class containing several virtual processors, each an oninit process]
A thread is either running on a particular processor, or it is in one of a series of queues. The
ready queue holds the contexts of threads waiting to run on a processor. When a processor is
free, it takes the context of a thread from the ready queue. An internal prioritization mechanism
determines which thread the processor takes from the queue. The processor replaces its current
context with the context of the new thread and continues processing on behalf of that thread.
The ready queue is shared between processors of the same class so that a thread can migrate
between several processors during its lifetime (although the server tends to keep a thread
running on the same virtual processor). This mechanism keeps the work balanced between the
processes and ensures that a thread runs whenever a processor is available.
Running a Thread
To run a thread, the process retrieves the context of a thread from the ready queue and replaces
the context of the process with the context of the new thread. A thread can run on any virtual
processor in its class.

[Figure: two virtual processors sharing one ready queue that holds the contexts of Thread 1, Thread 2, and Thread 3]
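The shared ready queue can be sketched with ordinary OS threads standing in for virtual processors. This is illustrative Python, not server code: two "VPs" of the same class drain a single queue of thread contexts, so any context may be run by either VP.

```python
import queue
import threading

ready = queue.Queue()       # the ready queue, shared by all VPs of the class
ran = []                    # which contexts got run (order is not guaranteed)
ran_lock = threading.Lock()

def virtual_processor():
    # A VP's loop: take a thread context off the ready queue and run it.
    while True:
        ctx = ready.get()
        if ctx is None:     # sentinel: no more work, shut this VP down
            break
        with ran_lock:
            ran.append(ctx)

for ctx in ["thread1", "thread2", "thread3", "thread4"]:
    ready.put(ctx)

vps = [threading.Thread(target=virtual_processor) for _ in range(2)]
for vp in vps:
    ready.put(None)         # one sentinel per VP, queued after all the work
    vp.start()
for vp in vps:
    vp.join()
```

Because both workers pull from the same queue, the work stays balanced between them, which is the load-balancing property described above.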
At a specific point of execution, the thread yields control of the virtual processor to another
thread. Some common actions that might cause the thread to yield are:
• Waiting for a disk read or write operation.
• Waiting for a request from the application process.
• Waiting for a lock or other resource.
• No more work needs to be done.
A thread also might yield control to another thread for no reason other than to give another
thread a chance to run.
When a thread yields control, it is responsible for putting its context on a queue to wait or sleep.
The wait queue is used basically to wait on an operation. The sleep queue is used for threads
that need to be awakened after a period of time.
The processor then takes the context of another thread from the ready queue and replaces its
context with the new thread. The processor continues execution with the new context.
Yielding Control to Another Thread
At a specific point of execution, the thread yields control of the virtual processor. The thread's
context is put on either the wait or the sleep queue (1). The context of another thread is taken
from the ready queue and run (2).

[Figure: a virtual processor performing steps (1) and (2). Ready queue: Thread4, Thread8, Thread6. Wait queue: Thread1, Thread7, Thread3. Sleep queue: Thread2, Thread5, Thread9.]
Some of the advantages of server threads are listed below:
• Fewer processes are needed to do the same work. This is particularly efficient in an
OLTP environment with a large number of users. You can think of this as fan-in: a
large number of application processes are served by a small number of database server
processes. An added advantage is that the system can support more users, because the
operating system has fewer processes to manage.
• In addition to fan-in, the server also offers fan-out: multiple database server processes
can do work for one application.
• The multithreaded architecture replaces much of the context switching done by the
operating system with context switching done by the processes within the database
server. Context switching is faster when done within a process because there is less
information to swap.
• The database server process does its own thread scheduling. This means the DBMS,
rather than the operating system, determines task priority.
• Some features offered by multiprocessor systems make this kind of architecture even
more efficient. For example, an important database server process might be given
exclusive rights to a particular processor.
Advantages of Server Threads
What can a multithreaded architecture do for the server?
• Fewer database server processes are needed to do the same work (fan-in)
• More database server processes can do work for one user (fan-out)
• Thread synchronization and context switching are faster when done by the database
server rather than the operating system
• The server can do its own thread scheduling
• It is easier to take advantage of scheduling features offered by hardware vendors
One of the advantages of the server is its fan-out capability. This means that users can take
advantage of multiple database server processes (and multiple CPUs, if available) working
simultaneously to do their work.
The server creates multiple threads that do the work for one user for the following operations:
• Sorting
• Indexing
• Recovery
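Fan-out for a sort can be sketched with ordinary threads. The following is an illustrative Python analogue, not the server's implementation: the data is split into chunks, each chunk is sorted by its own thread, and the sorted runs are merged at the end.

```python
import heapq
import threading

def parallel_sort(data, workers=2):
    """Fan-out: several threads sort chunks of one user's data in
    parallel, then the sorted runs are merged into one result."""
    # Deal the data out into one chunk per worker.
    chunks = [data[i::workers] for i in range(workers)]
    results = [None] * workers

    def sort_chunk(i):
        results[i] = sorted(chunks[i])   # each thread sorts its own run

    threads = [threading.Thread(target=sort_chunk, args=(i,))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Merge the already-sorted runs into a single sorted list.
    return list(heapq.merge(*results))

print(parallel_sort([5, 3, 1, 4, 2, 9, 0]))
# [0, 1, 2, 3, 4, 5, 9]
```

The chunking and merge strategy here are invented for illustration; only the split-sort-merge shape mirrors the fan-out idea above.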
Example of Fan-Out
[Figure: fan-out. At the MT layer, several threads run for one client; they are scheduled onto multiple virtual processors at the OS layer, which run on multiple CPUs at the hardware layer.]
The slides on the next two pages show the different VP classes available in the server system.
The number of VPs in each VP class can sometimes be configured by the system administrator.
In other cases, they are configured by the server automatically.
• CPU - The CPU VP is where most of the processing occurs. All user threads and some
threads for the server system run on VPs in this class. The purpose of this class is to put
all the CPU-intensive activities on these processes so that they are kept busy and do not
sleep much (which would cause an expensive operating-system context switch). No
blocking system calls are allowed on this VP, such as activities that read and write from
disk or wait for messages from the application. The administrator can increase or
decrease the number of CPU VPs as needed while the server is up.
• PIO - The PIO VP runs internal threads for the server that perform writes to the
physical log on disk. The PIO VPs are automatically allocated when the server is
started. One PIO VP is usually allocated; if the dbspace with the physical log is
mirrored, then two PIO VPs are allocated.
• LIO - The LIO VP runs internal threads for the server that perform writes to the logical
log on disk. The LIO VPs are automatically allocated when the server is started. One
LIO VP is usually allocated; if the dbspace with the logical log is mirrored, then two
LIO VPs are allocated.
Virtual Processor Classes
CPU: All user threads and some system threads run in this class. No blocking OS calls occur. (Configurable)
PIO: Runs internal threads that write to the physical log. (1 or 2 VPs)
LIO: Runs internal threads that write to the logical log. (1 or 2 VPs)
AIO: Runs internal threads that perform disk I/O to cooked chunks, or to chunks where kernel I/O is not turned on. (Configurable)
ADT: Runs secure auditing threads. (0 or 1 VP)
classname: User-defined VP that runs UDRs in a thread-safe manner. (Configurable)
MSC: Runs threads for miscellaneous tasks. (1 VP)
Other VP classes include:
• SHM - The shared-memory class handles the task of polling for new connections using
the shared memory method of communication to the application. It also handles
incoming messages from the application. The number of shared memory VPs can be
configured before the server is started. If the shared memory method of communication
is not used, then no SHM VP is started.
• STR - The stream-pipe class handles communications by sending and receiving
messages through operating system stream mechanisms. The number of stream-pipe
VPs can be configured before the server is started. If the stream pipe method of
communication is not used, then no STR VP is started.
• TLI - The TLI class handles polling tasks for the TLI programming interface for
TCP/IP or IPX/SPX communication with the application. The number of TLI VPs can
be configured before the server is started. If TCP/IP with TLI is not used, then no TLI
VP is started.
• SOC - The SOC class handles polling tasks for the TCP/IP Berkeley sockets method of
communication with the application. The number of SOC VPs can be configured before
the server is started. If TCP/IP with sockets is not used, then no SOC VP is started.
Virtual Processor Classes (cont.)
SHM: Runs internal shared-memory communication threads. (Configurable)
STR: Runs internal stream-pipe communication threads. (Configurable)
TLI: Runs internal TLI network communication threads. (Configurable)
SOC: Runs internal sockets network communication threads. (Configurable)
ADM: Runs the timer. (1 VP)
OPT: Handles BLOB transfer to an optical subsystem. (0 or 1 VP)
JVP: Executes Java UDRs; contains the Java Virtual Machine (JVM). (Configurable)
Use the VPCLASS configuration parameter to specify the number of VPs to start for a specified
VP class when your server is brought from Offline to Online mode. The number of CPU, AIO,
SHM, STR, TLI, SOC, JVP, and user-defined VPs can be configured using this parameter. The
format for this configuration parameter is as follows:
VPCLASS vp-class[,options]
The options available for VPCLASS are shown here:
num=numvps
max=maxvps
aff=processor# or aff=first_processor#,last_processor#
noage
noyield
numvps is the number of VPs to start for the specified vp-class. Since virtual processors can be
dynamically allocated, maxvps sets a limit to the total number of VPs that can be allocated for
this class. Note that the configuration parameter options are separated by a comma and that there
are no spaces between the options. Here is an example of a configuration parameter setting to
start 3 CPU VPs on server startup with a maximum limit of 5 VPs:
VPCLASS cpu,num=3,max=5
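For illustration only, here is a small Python sketch (not part of any Informix tooling) that splits a VPCLASS line of the form shown above into its class name and options, following the comma-separated, no-spaces rule described in the text.

```python
def parse_vpclass(line):
    """Parse 'VPCLASS vp-class[,options]' into (class, options dict).

    Bare flags such as noage and noyield map to True; numeric values
    (num=, max=) are converted to int; anything else stays a string."""
    keyword, value = line.split(None, 1)
    if keyword != "VPCLASS":
        raise ValueError("not a VPCLASS parameter line")
    fields = value.split(",")        # options are comma-separated, no spaces
    vp_class, options = fields[0], {}
    for opt in fields[1:]:
        if "=" in opt:
            name, val = opt.split("=", 1)
            options[name] = int(val) if val.isdigit() else val
        else:
            options[opt] = True      # bare flag such as noage or noyield
    return vp_class, options

print(parse_vpclass("VPCLASS cpu,num=3,max=5"))
# ('cpu', {'num': 3, 'max': 5})
```

The parsing rules beyond what the text states (for example, treating non-numeric values as strings) are assumptions made for this sketch.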
VPCLASS Configuration Parameter
The VPCLASS configuration parameter allows you to customize the properties of a virtual
processor class. Configure this parameter by specifying:
• Virtual-processor class name
• Number of VPs to start
• Maximum number of VPs allowed
• Processors to bind the VPs to (affinity)
• Whether to disable priority aging
• Whether the VP yields to other routines (user-defined VPs only)
MULTIPROCESSOR
The MULTIPROCESSOR configuration parameter specifies whether you wish to turn on
specific multiprocessor features in the server system, such as changes in default parameters for
VPs, default read-ahead parameters, etc. A parameter value of 1 activates these features.
Spin Locks
This parameter also determines if the server can use spin locks. If you set MULTIPROCESSOR
to 1, server threads that are waiting for locks (known as mutexes in IBM Informix servers) in
some cases spin (keep trying at short intervals) instead of being put on a wait queue.
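The spin-versus-wait trade-off can be sketched as follows. This is an illustrative Python model, not the server's mutex code: the lock is tried a bounded number of times without blocking (the spin phase) before the caller gives up and blocks, which is the analogue of being put on a wait queue.

```python
import threading

class SpinThenWaitLock:
    """Try briefly to take the lock without blocking; only block
    (the analogue of sleeping on a wait queue) if spinning fails."""

    def __init__(self, spins=100):
        self._lock = threading.Lock()
        self._spins = spins

    def acquire(self):
        for _ in range(self._spins):              # the spin phase
            if self._lock.acquire(blocking=False):
                return
        self._lock.acquire()                      # give up spinning and block

    def release(self):
        self._lock.release()

# Exercise the lock from several threads incrementing a shared counter.
lock = SpinThenWaitLock()
counter = 0

def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1
        lock.release()

workers = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Spinning pays off when critical sections are short, because a spinning thread avoids the cost of being descheduled and rescheduled; the spin count here is an arbitrary choice for the sketch.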
SINGLE_CPU_VP
Setting the SINGLE_CPU_VP configuration parameter to 1 indicates that you intend to use only
one CPU VP on your server. This prevents the server from being started with more than one
CPU VP and does not allow you to dynamically add CPU or user-defined VPs. If the server is
restricted to only one CPU VP, it can bypass locking of some internal data structures (with
mutex calls) because no other process is using these structures.
Multiprocessor Configuration
[Figure: a single-processor system running one CPU VP, compared with a multiprocessor system running one CPU VP per processor]

Single-processor system:

VPCLASS cpu,num=1
MULTIPROCESSOR 0
SINGLE_CPU_VP 1

Multiprocessor system:

VPCLASS cpu,num=4
MULTIPROCESSOR 1
SINGLE_CPU_VP 0
In IBM Informix database servers, the client application can connect to the server by shared
memory, by stream pipes, or by a network connection using TLI or sockets. You can combine
the shared memory method of communication with the network connection method within the
same server.
Using the shared memory connection, the application communicates with the server by placing
and retrieving messages at an address in shared memory.
Communication through a network