CS354: Operating Systems
Gustavo Rodriguez-Rivera
Computer Science Department Purdue University
General Information
Web Page: http://www.cs.purdue.edu/homes/cs354
Office: LWSN1169
E-mail: grr@cs.purdue.edu
Textbook:
Silberschatz, Galvin and Gagne. Operating system concepts. Addison-Wesley. (Good reading.)
Recommended:
Advanced Programming in the UNIX Environment by W. Richard Stevens. (Useful for the shell. Good as a reference book.)
PSOs
There is no PSO the first week.
The projects will be explained in the PSOs.
E-mail questions to
cs354-ta@cs.purdue.edu
You can attend the second hour of any PSO, even one that is not your assigned PSO, to ask questions or work on your project.
Grading
Grade allocation:
Midterm: 25%
Final: 25%
Projects: 50%
Exams also include questions about the projects.
Course Organization
IMPORTANT: The content of the course is still being rewritten so the slides will change.
Check the slides before every lecture.
Course Organization
Virtual Memory (VM)
DOS and the early versions of MacOS didn't have virtual memory, but modern operating systems do.
Can optimize the execution of programs
Uses of VM
Only reading what is needed (i.e. "This page of the executable is not there, so I must load it")
Extends memory beyond the physical RAM available
Speeds up program loading
Speeds up process creation
Course Organization
Memory Allocation
Explicit Memory Management (call malloc/free)
Garbage Collection (The runtime collects unused memory)
Data Structures used for memory allocators.
Problems of calling free()
Premature free()
Memory leaks
Freeing non heap objects (can crash program)
Memory overwrites beyond the end of the object.
Course Organization
Program Structure
Memory of a program
Memory Sections
What is a Program
Life-cycle of a program
Basics of Computer Architecture
Von Neumann Architecture
Memory Hierarchy
Course Organization
Processes
Instance of a program that is running
Loading a program
Life-cycle of a program
Scheduling
Programs run for only a "quantum" of time before a context switch; a quantum is typically 10 milliseconds (1/100 sec).
How to execute new processes/programs
exec(), fork(), pipes
Course Organization
Threads
Concurrency
Useful for server applications to process multiple clients concurrently.
Also useful for user applications (one thread to update screen, another for input, etc.)
One can attain the same effect using multiple processes, but multiple threads is usually faster
Synchronization
Mutex locks
Semaphores
Course Organization
Threads (cont)
Condition variables
Producer/Consumer relationships in code (i.e. adding elements to a table in one thread, removing them from the table in another)
Deadlocks and prevention (Thread A waits for Thread B and Thread B waits for Thread A)
If it occurs, one usually has to "bounce the system" (kill the process and restart it) to fix it.
Course Organization
I/O
File Systems
Virtual Machines
Loadable Kernel Modules
Kernel Programming
Virtual Memory
Virtual Memory Introduction
VM allows processes with memory requirements larger than the available RAM to run on the computer.
If the following processes are running with the noted requirements:
IE (200MB),
MSWord (200MB),
Skype (70MB)
Operating System (1GB).
This would require 1.47GB of memory when there may be only 1GB of RAM available.
Virtual Memory Introduction
VM keeps in RAM only the memory that is currently in use.
The remaining memory is kept on disk in a special file called the "swap space".
The VM idea was created by Peter Denning, a former head of the CS Department at Purdue.
Other Uses of Virtual Memory
Another goal of VM is to speed up some of the tasks in the OS, for example:
Loading a program. The VM will load pages of the program as they are needed, instead of loading the program all at once.
During fork the child gets a copy of the memory of the parent. However, parent and child will use the same memory as long as it is not modified, making the fork call faster. This is called “copy-on-write”.
Shared Libraries across processes.
Shared memory
There are other examples that we will cover later.
VM Implementations
Process Swapping:
The entire memory of the process is swapped in and out of memory
Segment Swapping
Entire parts of the program (process) are swapped in and out of memory (libraries, text, data, bss, etc.).
The problem with process swapping and segment swapping is that the granularity is too big: pieces still in use could be swapped out together with pieces that were not in use.
Paging
Used by modern OSs. Covered in detail here.
Paging
Implementation of VM used by modern operating systems.
The unit of memory that is swapped in and out is a page
Paging divides the memory in pages of fixed size.
Usually the size of a page is 4KB in the Pentium (x86) architecture and 8KB in the SPARC Ultra architecture.
Paging
[Figure: the virtual address space in bytes (0, 4096, 8192, ..., 2^32-1 = 4G-1) is divided into pages numbered 0, 1, 2, ..., 2^32/4KB-1 = 2^20-1 = 1M-1; individual pages map to RAM pages (5, 24, 10, 3), to swap pages (456, 500), to pages of the executable (2), or are not mapped (invalid).]
Paging
The Virtual Memory system will keep in memory the pages that are currently in use.
It will leave in disk the memory that is not in use.
Backing Store
Every page in the address space is backed by a file on disk, called the backing store.
Swap Space
Swap space is a designated area on disk that is used by the VM system to store transient data.
In general any section in memory that is not persistent and will go away when the process exits is stored in swap space.
Examples: stack, heap, BSS, modified data, etc.
Swap Space
lore 208 $ df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 1032130 275238 705286 29% /
/proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
fd 0 0 0 0% /dev/fd
/dev/dsk/c0t0d0s4 2064277 1402102 600247 71% /var
swap 204800 2544 202256 2% /tmp
/dev/dsk/c0t2d0s6 15493995 11682398 3656658 77% /.lore/u92
/dev/dsk/c0t3d0s6 12386458 10850090 1412504 89% /.lore/u96
/dev/dsk/c0t1d0s7 15483618 11855548 3473234 78% /.lore/u97
bors-2:/p8 12387148 8149611 4113666 67% /.bors-2/p8
bors-2:/p4 20647693 11001139 9440078 54% /.bors-2/p4
xinuserver:/u3 8744805 7433481 1223876 86% /.xinuserver/u3
galt:/home 5161990 2739404 2370967 54% /.galt/home
xinuserver:/u57 15481270 4581987 10775435 30% /.xinuserver/u57
lucan:/p24 3024579 2317975 676359 78% /.lucan/p24
ector:/pnews 8263373 359181 7821559 5% /.ector/pnews
Swap Space
lore 206 $ /usr/sbin/swap -s
total: 971192k bytes allocated + 1851648k reserved = 2822840k used, 2063640k available
lore 207 $ /usr/sbin/swap -l
swapfile dev swaplo blocks free
/dev/dsk/c0t0d0s1 32,1025 16 2097392 1993280
/dev/dsk/c0t1d0s1 32,1033 16 2097392 2001792
Implementation of Paging
Paging adds an extra indirection to memory access.
This indirection is implemented in hardware, so it does not have excessive execution overhead.
The Memory Management Unit (MMU) translates Virtual Memory Addresses (vmaddr) to physical memory addresses (phaddr).
The MMU uses a page table to do this translation.
PagingPagingThere are two types of addresses:
Virtual Memory Addresses: the address that the CPU is using. Addresses used by programs are of this type.
Physical Memory Addresses: The addresses of RAM pages. This is the hardware address.
The MMU translates the virtual memory addresses to physical memory addresses.
The Memory Management Unit
[Figure: the CPU issues a VM address on the address bus; the Memory Management Unit (MMU), using its Translation Look-Aside Buffer (TLB), Page Table Register, and page table, translates it into a physical (hardware) address used to access the memory cache, RAM, and I/O over the data bus.]
The Memory Management Unit
The MMU has a Page Table Register that points to the current page table that will be used for the translation.
Each process has its own page table.
The page table register is updated during a context switch from one process to the other.
The page table has the information about which memory ranges are valid in a process.
The Memory Management Unit
The value of the page table register changes every time there is a context switch from one process to another.
Consecutive pages in virtual memory may correspond to non-consecutive pages in physical memory.
The Memory Management Unit
To prevent looking up the page table at every memory access, the most recent translations are stored in the Translation Look-Aside Buffer (TLB).
The TLB speeds up the translation from virtual to physical memory addresses.
A page fault is an interrupt generated by the MMU
VM to Hardware Address Translation
The VM address is divided into two parts:
Page number (higher 20 bits)
Offset (lower 12 bits: 0-4095) (assuming page size = 4096 = 2^12)
Only the page number is translated. The offset remains the same
Example: in 0x2345, the last 3 hex digits (12 bits) are the offset: 0x345. The remaining digits are the page number (20 bits): 0x2.
[Figure: bit layout of a VM address - bits 31-12 hold the page number, bits 11-0 hold the offset.]
VM to Hardware Address Translation
[Figure: the page number indexes the page table (2^32/2^12 = 2^20 entries); the entry supplies the physical page number, and the offset is appended unchanged to form the hardware address.]
VM to Hardware Address Translation (one-level page table)
[Figure: for VM address 0x2345, pagenum = 0x2 and offset = 0x345; PageTable[0x2] = 0x767, so haddr = 0x767345 (physical page 0x767, offset 0x345).]
Two-Level Page Tables
Using a one-level page table requires too much space: 2^20 entries * 4 bytes/entry ≈ 4MB.
Since the virtual memory address has a lot of gaps, most of these entries will be unused.
Modern architectures use a multi-level page table to reduce the space needed.
Two-Level Page Tables
The page number is divided into two parts: the first-level page number and the second-level page number.
Example: VM address:0x00402657
Offset=0x657 (last 3 hex digits)
2nd level index (j)= 0x2, 1st level index (i) = 0x1
[Figure: bit layout of a VM address - bits 31-22: first-level index (i, 10 bits), bits 21-12: second-level index (j, 10 bits), bits 11-0: offset (12 bits). 0x00402657 = 00 0000 0001 | 00 0000 0010 | 0110 0101 0111, so i = 0x1, j = 0x2, offset = 0x657.]
VM Address Translation
[Figure: the first-level page table (one for each process, 2^10 entries) entry i points to one of the second-level page tables (multiple tables for each process, allocated as needed); entry j of that table gives the physical page number.]
[Figure: for VM address 0x00402657, FirstLevelPageTable[0x1] points to a second-level table whose entry 0x2 holds the physical page number 0x2.]
Example
VM address: 0x00402657
Physical Memory Address: 0x2657
1. From the VM address find i, j, offset
2. SecondLevelPageTable= FirstLevelPageTable[i]
3. PhysMemPageNumber = SecondLevelPageTable[j]
4. PhysMemAddr= PhysMemPageNum*Pagesize + offset
Processes always have a first-level page table.
Second level page tables are allocated as needed.
Both the first-level and second-level page tables use 4KB.
Page Bits
Each entry in the page table needs only 20 bits to store the page number. The remaining 12 bits are used to store characteristics of the page.
Resident Bit:
Page is resident in RAM instead of swap space/file.
Modified Bit:
Page has been modified since the last time the bit was cleared. Set by the MMU.
Access Bit:
Page has been read since the last time the bit was cleared. Set by MMU
Permission:
Read: page is readable
Write: page is writable
Execute: page can be executed (the MMU enforces permissions)
Page Bits
If a CPU operation exceeds the permissions of a page, the MMU will generate an interrupt (page fault). The interrupt may be translated into a signal (SEGV, SIGBUS) to the process.
If a page is accessed and the page is not resident in RAM, the MMU generates an interrupt to the kernel and the kernel loads that page from disk.
Types of Page Fault
Page Fault
Page not Resident: the page is not in physical memory; it is on disk.
Protection Violation: a write or access permission (as indicated by the page bits) is violated.
Processing a Page Fault
1. A program tries to read/write a location in memory that is in a non-resident page. This could happen when:
fetching the next instruction to execute or
trying to read/write memory not resident in RAM
2. The MMU tries to look up the VM address and finds, using the resident bit, that the page is not resident. Then the MMU generates a page fault, that is, an interrupt from the MMU.
3. Save the return address and registers on the stack.
Processing a Page Fault
4. The CPU looks up the interrupt handler that corresponds to the page fault in the interrupt vector and jumps to this interrupt handler.
5. In the page fault handler
If the VM address corresponds to a page that is not valid for this process, then generate a SEGV signal to the process. The default behavior for SEGV is to kill the process and dump core
Otherwise, if VM address is in a valid page, then the page has to be loaded from disk.
Processing a Page Fault
6. Find a free page in physical memory. If there are no free pages, then reuse one that is in use, writing it to disk first if it was modified.
7. Load the page from disk and update the page table entry with the address of the new physical page. Also, clear the modified and access bits.
8. Restore registers, return and retry the offending instruction
Processing a Page Fault
The page fault handler retries the offending instruction at the end of the page fault.
The page fault is completely transparent to the program, that is, the program will have no knowledge that the page fault occurred.
Page Replacement Policies
During a page fault for a non-resident page:
Find an empty page in physical memory
If RAM is full, find a page that can be replaced.
Load the non-resident page from disk into RAM
What page to replace? Two policies:
FIFO - First In First Out
LRU - Least Recently Used
FIFO Page Replacement Policy
First In First Out. Replace the page that has been in RAM the longest.
Example. Assume 3 pages of RAM.
[Table: a trace of 8 VM page references to pages 1, 5, 3, and 6 over the 3 RAM pages; with FIFO replacement the trace gives 6 misses and 2 hits.]
LRU Page Replacement Policy
Least Recently Used. Replace the page that has not been used recently.
This is a good predictor of what page will not be used in the near future.
[Table: the same reference trace; with LRU replacement it gives 5 misses and 3 hits.]
LRU Page Replacement Policy
LRU works because of the intrinsic “Temporal Locality” of a program.
Temporal Locality: It is the tendency of a program to use a group of pages more frequently than others.
Example: 10% of the code is executed 90% of the time. 10% of the variables are used 90% of the time
Working Set
A working set is the set of pages that the program currently needs in RAM to run without having to swap in other pages from disk.
It is important for the OS to keep the working set of a process in RAM to reduce the cost of swapping in and out.
Implementation of LRU
To implement LRU we would need a timestamp in each page to record when the page was last accessed.
This timestamp is approximated using the access and modified bits.
Clock Algorithm (Second Chance Algorithm)
Initially all the access bits are clear.
All RAM pages are in a list.
If a page needs to be replaced, start the search one page after the last page replaced (one after where the last search ended)
If the page has the access bit set, clear the bit and skip that page.
If the page has the access bit not set, use that page for replacement.
When the end of the list is reached start from the top.
In this algorithm, pages accessed since the last time they were visited get another chance.
Clock Algorithm (Second Chance Algorithm)
[Figure: three walkthroughs over five pages showing the clock hand starting after the last replacement, clearing the access bit of each page it skips, and stopping at the first page whose access bit is already 0; in the last walkthrough page 1 was accessed, so it is skipped.]
Improved Clock Algorithm
In the simple second chance algorithm a referenced page gets another chance. The access bit goes from
1 -> 0 -> Replacement.
An improved second chance algorithm also uses the modified bit.
Access  Modified
0       0         Best candidate         (0 chances)
0       1         Not as good candidate  (2 chances)
1       0         Better candidate       (1 chance)
1       1         Worst candidate        (3 chances)
Improved Clock Algorithm
When visiting a page, the access and modified bits are changed as follows:
11 -> 01 -> 10 -> 00 -> Replacement.
Modified bits are saved into OS data structures before updating
In this way a page that has been accessed and modified will get three chances before it is replaced.
Using mmap
The mmap() function establishes a mapping between a process's address space and a file or shared memory object.
#include <sys/mman.h>
void *mmap(void *addr, size_t len, int prot,
int flags, int fildes, off_t off);
mmap() returns the address of the memory mapping, and it will always be aligned to the page size (addr % PageSize == 0).
The data in the file can be read/written as if it were memory.Using mmapUsing mmap Memory0x000000000xFFFFFFFFDiskFilemma
ppt
关于艾滋病ppt课件精益管理ppt下载地图下载ppt可编辑假如ppt教学课件下载triz基础知识ppt
r=
0x00020000ptr = mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0)Mmap parametersMmap parametersvoid *mmap(void *addr, size_t len, int prot,
int flags, int fildes, off_t off);
addr –
Suggested address. If NULL is passed the OS will choose the address of the mapping.
len –
Length of the memory mapping. The mapped file should have this length or larger, or the program gets SEGV on access.
prot –
Protections of the mapping: PROT_READ, PROT_WRITE, PROT_EXEC, PROT_NONE.Mmap parametersMmap parametersflags: - Semantics of the mapping:
MAP_SHARED – Changes in memory will be done in the file
MAP_PRIVATE – Changes in memory will be kept private to the process and will not be reflected in the file. This is called “copy-on-write”
MAP_FIXED – Force to use “addr” as is, without changing it. You should know what you are doing, since the memory may already be in use. Used by loaders.
MAP_NORESERVE – Do not reserve swap space in advance. Allocate swap space as needed.
MAP_ANON – Anonymous mapping. Do not use any fd (file). Use swap as the backing store. This option is used to allocate memory.
fd –
The file descriptor of the file that will be memory mapped. Pass -1 if MAP_ANON is used.
off –
Offset in the file where the mapping starts.