
50 Subjective Questions on Operating Systems for Competitive Exams

Q1. Explain scheduling objectives.

The objectives of scheduling are:
• Be fair while allocating resources to the processes.
• Maximize throughput of the system.
• Maximize the number of users receiving acceptable response times.
• Be predictable.
• Balance resource use.
• Avoid indefinite postponement.
• Enforce priorities.
• Give preference to processes holding key resources.
• Give better service to processes that have desirable behaviour patterns.

Q2. Explain scheduling criteria.

There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:
• CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
• Throughput - The number of processes completed per unit time. It may range from 10 per second to 1 per hour, depending on the specific processes.
• Turnaround time - The time required for a particular process to complete, from submission time to completion.
• Waiting time - How much time processes spend in the ready queue waiting their turn to get on the CPU.
• Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.



Q3. Consider the processes P1, P2, and P3 given in the table below. They arrive for execution in the same order, all with arrival time 0 and the given burst times.

Process     A.T.     B.T.
P1           0        24
P2           0         3
P3           0         3

Find the average waiting time, average turnaround time, and throughput.


Gantt chart (FCFS):

| P1 | P2 | P3 |
0    24   27   30

Waiting time = Completion time − Burst time − Arrival time
Waiting time of P1 = 24 − 24 − 0 = 0
Waiting time of P2 = 27 − 3 − 0 = 24
Waiting time of P3 = 30 − 3 − 0 = 27
Total waiting time = 0 + 24 + 27 = 51 ms
Average waiting time = (total waiting time) / (number of processes) = 51 / 3 = 17 ms

Total turnaround time = 24 + 27 + 30 = 81 ms
Average turnaround time = (total turnaround time) / (number of processes) = 81 / 3 = 27 ms

Throughput = 3 jobs / 30 ms = 0.1 jobs per ms
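The arithmetic above can also be checked mechanically. Below is a minimal C sketch (an illustration added here, not part of the standard answer; the process table is hard-coded from this question) that replays the FCFS schedule and prints the three metrics:

    #include <stdio.h>

    /* FCFS metrics for Q3: all three processes arrive at time 0. */
    int main(void) {
        int burst[] = {24, 3, 3};           /* P1, P2, P3 */
        int n = 3, clock = 0;
        int total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            total_wait += clock;            /* waiting = start time (arrival is 0) */
            clock += burst[i];              /* run to completion */
            total_tat += clock;             /* turnaround = completion - arrival */
        }
        printf("Average waiting time:    %.2f ms\n", (double)total_wait / n);  /* 17.00 */
        printf("Average turnaround time: %.2f ms\n", (double)total_tat / n);   /* 27.00 */
        printf("Throughput:              %.2f jobs/ms\n", (double)n / clock);  /* 0.10 */
        return 0;
    }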

Q4. Differentiate between preemptive and non-preemptive scheduling.

Preemptive scheduling:
• Basic: resources are allocated to a process for a limited time.
• A process can be interrupted in between.
• If a high-priority process frequently arrives in the ready queue, a low-priority process may starve.
• Has the overhead of scheduling the processes.
• Flexible.
• Has an associated cost.

Non-preemptive scheduling:
• Basic: once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
• A process cannot be interrupted until it terminates or switches to the waiting state.
• If a process with a long burst time is running on the CPU, another process with a shorter CPU burst time may starve.
• Does not have scheduling overhead.
• Rigid.
• Has no associated cost.

Q5. Explain Shortest Job First (SJF) scheduling with its advantages and disadvantages.


• The process with the shortest burst time is scheduled first.
• If two processes have the same burst time, then FCFS is used to break the tie.
• SJF can be used as either a non-preemptive or a preemptive scheduling algorithm.
• It is the best approach to minimize waiting time.
• It is easy to implement in batch systems, where the required CPU time is known in advance.
• It is impossible to implement in interactive systems, where the required CPU time is not known.
• The processor should know in advance how much time the process will take.
• The preemptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).

Advantages:
• SRTF is optimal and guarantees the minimum average waiting time.
• It provides a standard for other algorithms, since no other algorithm performs better than it.

Disadvantages:
• It cannot be implemented practically, since the burst times of the processes cannot be known in advance.
• It leads to starvation for processes with larger burst times.
• Priorities cannot be set for the processes.
• Processes with larger burst times have poor response time.



Q6. Consider the set of 5 processes whose arrival times and burst times are given below.

Process     A.T.     B.T.
P1           3        1
P2           1        4
P3           4        2
P4           0        6
P5           2        3

If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting
time and average turnaround time.

 


Gantt chart (SJF non-preemptive):

| P4 | P1 | P3 | P5 | P2 |
0    6    7    9    12   16

Turnaround time = Exit time − Arrival time
Waiting time = Turnaround time − Burst time

Process     Turnaround time     Waiting time
P1          7 − 3 = 4           4 − 1 = 3
P2          16 − 1 = 15         15 − 4 = 11
P3          9 − 4 = 5           5 − 2 = 3
P4          6 − 0 = 6           6 − 6 = 0
P5          12 − 2 = 10         10 − 3 = 7

Average turnaround time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 units
Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 units
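For readers who want to verify such schedules programmatically, here is a small C sketch (an added illustration, with the Q6 data hard-coded) of non-preemptive SJF: at every decision point it runs the shortest arrived, unfinished job to completion:

    #include <stdio.h>

    /* Non-preemptive SJF for the Q6 data. */
    int main(void) {
        int at[] = {3, 1, 4, 0, 2};    /* arrival times of P1..P5 */
        int bt[] = {1, 4, 2, 6, 3};    /* burst times of P1..P5   */
        int n = 5, done = 0, clock = 0;
        int finished[5] = {0};
        double tat_sum = 0, wt_sum = 0;

        while (done < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)          /* shortest arrived, unfinished job */
                if (!finished[i] && at[i] <= clock &&
                    (pick < 0 || bt[i] < bt[pick]))
                    pick = i;
            if (pick < 0) { clock++; continue; } /* CPU idle until next arrival */
            clock += bt[pick];
            finished[pick] = 1; done++;
            tat_sum += clock - at[pick];             /* turnaround = exit - arrival  */
            wt_sum  += clock - at[pick] - bt[pick];  /* waiting = turnaround - burst */
        }
        printf("Average turnaround time: %.1f\n", tat_sum / n); /* 8.0 */
        printf("Average waiting time:    %.1f\n", wt_sum / n);  /* 4.8 */
        return 0;
    }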



Q7. Explain Round Robin scheduling with its advantages and disadvantages.


• The CPU is assigned to processes on an FCFS basis, but for a fixed amount of time.
• This fixed amount of time is called the time quantum or time slice.
• After the time quantum expires, the running process is preempted and sent to the ready queue.
• Then, the processor is assigned to the next arrived process.
• It is always preemptive in nature.

Advantages:
• It gives the best performance in terms of average response time.
• It is best suited for time-sharing systems, client-server architectures, and interactive systems.

Disadvantages:
• It leads to starvation for processes with larger burst times, as they have to repeat the cycle many times.
• Its performance heavily depends on the time quantum.
• Priorities cannot be set for the processes.

With a decreasing value of time quantum:
• The number of context switches increases.
• Response time decreases.
• The chance of starvation decreases.
Thus, a smaller time quantum is better in terms of response time.

With an increasing value of time quantum:
• The number of context switches decreases.
• Response time increases.
• The chance of starvation increases.
Thus, a larger time quantum is better in terms of the number of context switches.

• With an increasing time quantum, Round Robin scheduling tends towards FCFS scheduling; as the time quantum tends to infinity, Round Robin becomes FCFS.
• The performance of Round Robin scheduling heavily depends on the value of the time quantum, which should be neither too big nor too small.



Q8. Consider the set of 5 processes whose arrival times and burst times are given below.

Process     A.T.     B.T.
P1           0        5
P2           1        3
P3           2        1
P4           3        2
P5           4        3

If the CPU scheduling policy is Round Robin with time quantum = 2 unit, calculate
the average waiting time and average turnaround time.
 


Gantt chart (Round Robin, time quantum = 2):

| P1 | P2 | P3 | P1 | P4 | P5 | P2 | P1 | P5 |
0    2    4    5    7    9    11   12   13   14

Turnaround time = Exit time − Arrival time
Waiting time = Turnaround time − Burst time

Process     Turnaround time     Waiting time
P1          13 − 0 = 13         13 − 5 = 8
P2          12 − 1 = 11         11 − 3 = 8
P3          5 − 2 = 3           3 − 1 = 2
P4          9 − 3 = 6           6 − 2 = 4
P5          14 − 4 = 10         10 − 3 = 7

Average turnaround time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 units
Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 units
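The chart and figures above can be reproduced with a short C simulation (an added sketch; the ready queue is a simple array, and newly arrived processes are admitted before a preempted process is re-queued, which is the convention the chart follows):

    #include <stdio.h>

    /* Round Robin with quantum = 2 for the Q8 data. */
    int main(void) {
        int at[] = {0, 1, 2, 3, 4}, bt[] = {5, 3, 1, 2, 3};
        int rem[5], q[64], head = 0, tail = 0;
        int n = 5, quantum = 2, clock = 0, next = 0, done = 0;
        double tat_sum = 0, wt_sum = 0;

        for (int i = 0; i < n; i++) rem[i] = bt[i];
        q[tail++] = 0; next = 1;                  /* P1 arrives at t = 0 */

        while (done < n) {
            int p = q[head++];
            int run = rem[p] < quantum ? rem[p] : quantum;
            clock += run; rem[p] -= run;
            while (next < n && at[next] <= clock) /* admit new arrivals first */
                q[tail++] = next++;
            if (rem[p] > 0) q[tail++] = p;        /* re-queue if unfinished */
            else {
                done++;
                tat_sum += clock - at[p];          /* turnaround = exit - arrival  */
                wt_sum  += clock - at[p] - bt[p];  /* waiting = turnaround - burst */
            }
        }
        printf("Average turnaround time: %.1f\n", tat_sum / n); /* 8.6 */
        printf("Average waiting time:    %.1f\n", wt_sum / n);  /* 5.8 */
        return 0;
    }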



Q9 Explain priority scheduling.


• Out of all the available processes, the CPU is assigned to the process having the highest priority.
• In case of a tie, it is broken by FCFS scheduling.
• Priority scheduling can be used in both preemptive and non-preemptive mode.
• The waiting time for the process having the highest priority will always be zero in preemptive mode.
• The waiting time for the process having the highest priority may not be zero in non-preemptive mode.
• Priority scheduling behaves exactly the same in preemptive and non-preemptive mode when all the processes arrive at the same time, i.e., all the processes become available together.

Advantages:
• It considers the priority of the processes and allows the important processes to run first.
• Priority scheduling in preemptive mode is best suited for real-time operating systems.

Disadvantages:
• Processes with lower priority may starve for the CPU.
• There is no guarantee regarding response time or waiting time.



Q10. Consider the set of 5 processes whose arrival times, burst times, and priorities are given below.

Process     A.T.     B.T.     Priority
P1           0        4        2
P2           1        3        3
P3           2        1        4
P4           3        5        5
P5           4        2        5

If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time
and average turnaround time. (Higher number represents higher priority).



Gantt chart (priority non-preemptive, higher number = higher priority):

| P1 | P4 | P5 | P3 | P2 |
0    4    9    11   12   15

Turnaround time = Exit time − Arrival time
Waiting time = Turnaround time − Burst time

Process     Turnaround time     Waiting time
P1          4 − 0 = 4           4 − 4 = 0
P2          15 − 1 = 14         14 − 3 = 11
P3          12 − 2 = 10         10 − 1 = 9
P4          9 − 3 = 6           6 − 5 = 1
P5          11 − 4 = 7          7 − 2 = 5

Average turnaround time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 units
Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 units
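The simulation is the same loop as the SJF sketch under Q6; only the selection key changes from burst time to priority. A C sketch with the Q10 data hard-coded (the strict comparison keeps the earlier process on priority ties, which matches the FCFS tie-break for this data):

    #include <stdio.h>

    /* Non-preemptive priority scheduling for Q10 (higher number = higher priority). */
    int main(void) {
        int at[] = {0, 1, 2, 3, 4}, bt[] = {4, 3, 1, 5, 2};
        int pr[] = {2, 3, 4, 5, 5};
        int n = 5, done = 0, clock = 0, finished[5] = {0};
        double tat_sum = 0, wt_sum = 0;

        while (done < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)      /* highest-priority arrived job */
                if (!finished[i] && at[i] <= clock &&
                    (pick < 0 || pr[i] > pr[pick]))
                    pick = i;
            if (pick < 0) { clock++; continue; }
            clock += bt[pick];
            finished[pick] = 1; done++;
            tat_sum += clock - at[pick];
            wt_sum  += clock - at[pick] - bt[pick];
        }
        printf("Average turnaround time: %.1f\n", tat_sum / n); /* 8.2 */
        printf("Average waiting time:    %.1f\n", wt_sum / n);  /* 5.2 */
        return 0;
    }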



Q11. State and explain storage management strategies.


Several strategies are used to obtain the best possible use of main storage:

1. Fetch strategies are concerned with when to obtain the next piece of program or data for transfer from secondary storage to main storage. There are two approaches:
A) Demand fetch: the next piece of program or data is brought into main memory when it is referenced by a running program.
B) Anticipatory fetch: predictions are made, anticipating where program control will go next.

2. Placement strategies are concerned with determining where in main storage to place a new program. They are most commonly used to select a free hole from the set of available holes. There are three placement strategies:
A) First fit: the incoming job is placed in the first hole that is big enough. It does not need to search the whole set of holes, so it is the fastest placement strategy.
B) Best fit: allocate the smallest hole that is big enough. It needs to search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
C) Worst fit: allocate the largest available hole. Again, it must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.

3. Replacement strategies are concerned with determining which piece of program or data to displace to make room for a new program or data.



Q12. Define virtual memory. What are its advantages?


Virtual memory is a technique that allows the execution of processes that are not completely in memory.

Advantages:
1) It enables users to run programs that are larger than actual physical memory.
2) It makes the task of programming much easier.
3) It allows processes to share files easily and to implement shared memory.
4) It provides an efficient mechanism for process creation.



Q13. When will a page fault occur? What is the procedure for handling a page fault?


A page fault occurs when a process tries to access a page that is not in main memory. The procedure for handling a page fault is:
1) Check an internal table for this process to determine whether the reference was a valid or an invalid memory access.
2) If the reference was invalid, terminate the process. If it was valid, but that page has not yet been brought in, page it in.
3) Find a free frame.
4) Schedule a disk operation to read the desired page into the newly allocated frame.
5) When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6) Restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.



Q14. How can the performance of demand paging be measured?


The performance of demand paging is measured by the effective access time of a demand-paged memory:

Effective access time = (1 − p) × ma + p × page fault time

where p is the probability of a page fault (0 ≤ p ≤ 1) and ma is the memory access time, which typically ranges from 10 to 200 nanoseconds.



Q15. Assume an average page-fault service time of 25 milliseconds and a memory access time of 100 nanoseconds. Find the effective access time.


Effective Access Time (EAT) = (1 − p) × ma + p × page fault time
= (1 − p) × 100 + p × 25,000,000
= 100 − 100p + 25,000,000p
= 100 + 24,999,900 × p nanoseconds
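A tiny C computation (illustrative) shows how strongly the fault rate p dominates the result:

    #include <stdio.h>

    /* Effective access time for Q15: ma = 100 ns, fault service = 25 ms. */
    int main(void) {
        double ma = 100.0, fault = 25e6;       /* both in nanoseconds */
        double ps[] = {0.0, 1e-6, 1e-3};
        for (int i = 0; i < 3; i++) {
            double eat = (1 - ps[i]) * ma + ps[i] * fault;
            printf("p = %g  ->  EAT = %.1f ns\n", ps[i], eat);
        }
        return 0;
    }

Even at p = 0.000001 (one fault per million accesses), the EAT rises from 100 ns to about 125 ns.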



Q16. What are the steps to modify the page-fault service routine to include page replacement?


1. Find the location of the desired page on the disk.
2. Find a free frame:
a) If there is a free frame, use it.
b) If there is no free frame, use a page-replacement algorithm to select a victim frame.
c) Write the victim frame to the disk and update the page table.
3. Read the desired page and store it in the free frame. Adjust the page table.
4. Restart the user process.



Q17. Define: (1) Cooperating process, (2) Independent process, (3) Race condition, (4) Critical section, (5) Semaphore.


1) Cooperating process: a process that can affect or be affected by other processes executing in the system.
2) Independent process: a process that cannot affect or be affected by other processes executing in the system.
3) Race condition: when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place, this is called a race condition.
4) Critical section: a segment of code in a process in which the process may be changing common variables, updating a table, writing a file, and so on.
5) Semaphore: a synchronization tool which can be used to deal with the critical-section problem (it does not require busy waiting). A semaphore S supports two standard operations: wait() and signal(). The wait() operation was originally termed P, and signal() was called V.



Q18. What are the requirements of a solution to the critical-section problem?


The requirements are:
1. Mutual exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely. (No process should have to wait forever to enter its critical section.)
3. Bounded waiting: there exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.



Q19. Define the semaphore operations.


The classical definitions of wait and signal are:

The P operation (wait) on semaphore S, written P(S), operates as follows:

    if S > 0 then
        S := S - 1
    else
        (wait on S)

The V operation (signal) on semaphore S, written V(S), operates as follows:

    if (one or more processes are waiting on S) then
        (let one of these processes proceed)
    else
        S := S + 1
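These definitions map directly onto POSIX semaphores, where P(S) is sem_wait() and V(S) is sem_post(). A minimal C sketch (illustrative; a semaphore initialized to 1 acts as a mutex around a shared counter):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t s;
    long counter = 0;

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&s);      /* P(S): decrement, block while S == 0 */
            counter++;         /* critical section */
            sem_post(&s);      /* V(S): increment, wake one waiter */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&s, 0, 1);    /* initial value 1 gives mutual exclusion */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 */
        sem_destroy(&s);
        return 0;
    }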



Q20. Consider the following page reference string with three frames that are initially empty. Find the number of page faults using the LRU algorithm, where the page reference sequence is: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.


Reference:  7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1

Frame contents after each reference (LRU, 3 frames):
Frame 1:    7  7  7  2  2  2  2  4  4  4  0  0  0  1  1  1  1  1  1  1
Frame 2:       0  0  0  0  0  0  0  0  3  3  3  3  3  3  0  0  0  0  0
Frame 3:          1  1  1  3  3  3  2  2  2  2  2  2  2  2  2  7  7  7

Page faults = 12
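The fault count can be confirmed with a direct C implementation of LRU (an added sketch: each frame records the time of its last use, and on a fault the oldest is evicted):

    #include <stdio.h>

    /* LRU page replacement for the Q20 reference string, 3 frames. */
    int main(void) {
        int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = 20, nframes = 3;
        int frame[3], last[3];     /* page held by each frame, time of last use */
        int used = 0, faults = 0;

        for (int t = 0; t < n; t++) {
            int hit = -1;
            for (int f = 0; f < used; f++)
                if (frame[f] == ref[t]) hit = f;
            if (hit >= 0) { last[hit] = t; continue; }  /* hit: refresh recency */
            faults++;
            if (used < nframes) {                       /* fill a free frame */
                frame[used] = ref[t]; last[used] = t; used++;
            } else {
                int victim = 0;                         /* least recently used */
                for (int f = 1; f < nframes; f++)
                    if (last[f] < last[victim]) victim = f;
                frame[victim] = ref[t]; last[victim] = t;
            }
        }
        printf("page faults = %d\n", faults);  /* prints 12 */
        return 0;
    }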



Q21. What are the advantages of using a pager (i.e., demand paging)?


1) Less swap time.
2) Less I/O needed.
3) Less physical memory needed.
4) More users can be supported.
5) Faster response time.



Q22. Consider a program that consists of five segments: S0 = 600 KB, S1 = 14 KB, S2 = 100 KB, S3 = 580 KB, and S4 = 96 KB. Assume that at that time, the available free memory partitions are 1200–1805, 50–150, 220–234, and 2500–3180. Find the following:
1. Draw the logical-to-physical maps and the segment table.
2. Allocate space for each segment in memory.
3. Calculate the external fragmentation and the internal fragmentation.
4. What are the physical memory addresses for the following logical addresses: (a) 0.580, (b) 1.17, (c) 2.66, (d) 3.82, (e) 4.20?


3. External fragmentation = 0.
Internal fragmentation = (160 − 150) + (1805 − 1800) + (3180 − 3176) = 10 + 5 + 4 = 19
Total fragmentation = external fragmentation + internal fragmentation = 0 + 19 = 19

4. The physical addresses are:
a) 0.580 → physical address = 1200 + 580 = 1780
b) 1.17 → since d > limit of S1, the address is invalid
c) 2.66 → physical address = 50 + 66 = 116
d) 3.82 → physical address = 2500 + 82 = 2582
e) 4.20 → physical address = 3080 + 20 = 3100
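The part-4 lookups follow the usual segment-table rule: physical = base + offset, valid only when offset < limit. A C sketch using the bases implied by the answer above (the segment table is inferred from the worked answer; S1's base is assumed to be 220, the 14 KB hole):

    #include <stdio.h>

    struct segment { int base, limit; };

    int main(void) {
        /* Segment table inferred from the worked answer. */
        struct segment table[] = {
            {1200, 600}, {220, 14}, {50, 100}, {2500, 580}, {3080, 96}
        };
        int query[][2] = {{0,580},{1,17},{2,66},{3,82},{4,20}};

        for (int i = 0; i < 5; i++) {
            int s = query[i][0], d = query[i][1];
            if (d >= table[s].limit)             /* offset beyond segment end */
                printf("%d.%d -> trap: offset exceeds limit\n", s, d);
            else
                printf("%d.%d -> physical address %d\n", s, d, table[s].base + d);
        }
        return 0;
    }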



Q23. Differentiate between segmentation and paging.


Segmentation:
• Physical memory is broken into variable-sized blocks called segments.
• The address generated by the CPU is divided into a segment number (s) and a segment offset (d).
• Each segment has a name and a (variable) length.
• Physical address = segment base + offset.
• May suffer from external fragmentation.

Paging:
• Physical memory is broken into fixed-sized blocks called frames, and logical memory is broken into blocks of the same size called pages.
• The address generated by the CPU is divided into a page number (p) and a page offset (d).
• Pages have a fixed size.
• Physical address = page size × frame number + offset.
• No external fragmentation.



Q24. Differentiate between contiguous and non-contiguous storage allocation.


Contiguous storage:
1. A program occupies a single contiguous block of storage locations.
2. Faster storage access.
3. More memory is wasted.

Non-contiguous storage:
1. A program is divided into several blocks or segments that may be placed anywhere in memory, not necessarily adjacent to one another.
2. Slower storage access.
3. Less memory is wasted.



Q25. Differentiate between first-fit and best-fit placement.


First fit:
1. The incoming job is placed in the first hole that is big enough.
2. It does not need to search the entire set of holes to place the job.
3. It is faster to allocate.
4. More storage is wasted.

Best fit:
1. The incoming job is placed in the smallest hole that is big enough.
2. It needs to search the entire set of holes to place the job.
3. It is slower.
4. Less storage is wasted.



Q26. Explain multilevel queue scheduling.


A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm.

Consider an example of a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, can run unless the queues for system processes, interactive processes, and interactive editing processes are all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.



Q27 Explain deadlock.


Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process. For example, process 1 may hold resource 1 while waiting for resource 2, which is held by process 2, while process 2 is in turn waiting for resource 1.

A deadlock can arise if the following four necessary conditions hold simultaneously:
1. Mutual exclusion: at least one resource is non-sharable, meaning only one process can use it at a time.
2. Hold and wait: a process is holding at least one resource and waiting for additional resources.
3. No preemption: a resource cannot be taken from a process; it is released only voluntarily by the process holding it.
4. Circular wait: a set of processes are waiting for each other in circular form; all the processes are waiting for resources in a cyclic manner, so that the last process is waiting for a resource held by the first process.



Q28. Deadlock Handling strategies.


The strategies for handling deadlock are:
1. Deadlock prevention
2. Deadlock avoidance
3. Deadlock detection and recovery
4. Deadlock ignorance

1. Deadlock prevention: deadlocks can be prevented by preventing at least one of the four required conditions.
• Mutual exclusion: shared resources such as read-only files do not lead to deadlocks. Unfortunately, some resources, such as printers and tape drives, require exclusive access by a single process.
• Hold and wait: to prevent this condition, processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.
• No preemption: preemption of process resource allocations can prevent this condition, when it is possible.
• Circular wait: one way to avoid circular wait is to number all resources and to require that processes request resources only in strictly increasing (or decreasing) order (a sketch of this resource-ordering rule follows this answer).

2. Deadlock avoidance: the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. The process continues as long as the system remains in a safe state; once the system would move to an unsafe state, the OS has to backtrack one step. In simple words, the OS reviews each allocation so that the allocation does not cause a deadlock in the system.

3. Deadlock detection and recovery: this strategy involves waiting until a deadlock occurs. After a deadlock occurs, the system state is recovered. The main challenge with this approach is detecting the deadlock.

4. Deadlock ignorance: this strategy involves ignoring the concept of deadlock and assuming it does not exist. It avoids the extra overhead of handling deadlock. Windows and Linux use this strategy, and it is the most widely used method.
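The resource-ordering idea for breaking circular wait is easy to see in code. A C sketch (illustrative; lock_pair and unlock_pair are hypothetical helper names): every thread acquires the lower-numbered mutex first, so a cycle of waits cannot form:

    #include <pthread.h>

    pthread_mutex_t resource[2] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
    };

    /* Acquire both resources in a fixed global order, whichever
     * pair the caller names. */
    void lock_pair(int a, int b) {
        int lo = a < b ? a : b, hi = a < b ? b : a;
        pthread_mutex_lock(&resource[lo]);   /* lower-numbered first */
        pthread_mutex_lock(&resource[hi]);
    }

    void unlock_pair(int a, int b) {
        pthread_mutex_unlock(&resource[a]);
        pthread_mutex_unlock(&resource[b]);
    }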



Q29. Distributed operating systems.


In a distributed system, different machines are connected in a network, and each machine has its own processor and its own local memory. The operating systems on all the machines work together to manage the collective network resources. Distributed systems can be classified into two categories:
1. Client-server systems
2. Peer-to-peer systems

Advantages of distributed systems:
• Resource sharing
• Computation speedup (load sharing)
• Reliability
• Communication

Distributed systems require a networking infrastructure, such as local area networks (LAN) or wide area networks (WAN).



Q30. What is the difference between a process and a program?


1) Both are the same beast with different names: while the beast is sleeping (not executing) it is called a program, and when it is executing it becomes a process.
2) A program is a static object, whereas a process is a dynamic object.
3) A program resides in secondary storage, whereas a process resides in main memory.
4) The lifespan of a program is unlimited, but the lifespan of a process is limited.
5) A process is an 'active' entity, whereas a program is a 'passive' entity.
6) A program is an algorithm expressed in a programming language, whereas a process is its execution-time expression in assembly or machine language.



Q31. Explain the PCB (Process Control Block).


The PCB holds the information associated with each process:
• Process state
• Program counter
• CPU registers
• CPU-scheduling information
• Memory-management information
• Accounting information
• I/O status information

Process state: the state may be new, ready, running, waiting, halted, and so on.
Program counter: the counter indicates the address of the next instruction to be executed for this process.
CPU registers: the registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: this includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
Memory-management information: this may include the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
Accounting information: this includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O status information: this includes the list of I/O devices allocated to the process, a list of open files, and so on.

The PCB simply serves as the repository for any information that may vary from process to process.



Q32. Explain schedulers.


A scheduler is a decision maker that moves processes from one scheduling queue to another or allocates the CPU for execution. The operating system has three types of scheduler:
1. Long-term scheduler or job scheduler
2. Short-term scheduler or CPU scheduler
3. Medium-term scheduler

Long-term scheduler (job scheduler):
• Selects processes from disk and loads them into main memory for execution. It executes much less frequently.
• It controls the degree of multiprogramming (i.e., the number of processes in memory).
• Because of the longer interval between executions, the long-term scheduler can afford to take more time to select a process for execution.

Short-term scheduler (CPU scheduler):
• Selects a process from among the processes that are ready to execute and allocates the CPU.
• It must select a new process for the CPU frequently: a process may execute for only a few milliseconds before waiting for an I/O request.

Medium-term scheduler:
• Schedules processes at an intermediate level of scheduling.

Processes can be described as either:
✦ I/O-bound: spends more time doing I/O than computation; many short CPU bursts.
✦ CPU-bound: spends more time doing computation; a few very long CPU bursts.



Q33. Explain cooperating processes and their advantages.


The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system; clearly, any process that does not share any data (temporary or persistent) with any other process is independent. On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system; clearly, any process that shares data with other processes is a cooperating process.

Advantages of process cooperation:
• Information sharing: since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment that allows concurrent access to these types of resources.
• Computation speedup: if we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
• Modularity: we may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
• Convenience: even an individual user may have many tasks on which to work at one time. For instance, a user may be editing, printing, and compiling in parallel.

Concurrent execution of cooperating processes requires mechanisms that allow processes to communicate with one another and to synchronize their actions.



Q34. Benefits of multithreading.


The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness: multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.
2. Resource sharing: by default, threads share the memory and the resources of the process to which they belong. The benefit of sharing code is that it allows an application to have several different threads of activity all within the same address space.
3. Economy: allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. It can be difficult to gauge empirically the difference in overhead, but in general it is much more time-consuming to create and manage processes than threads. In Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower.
4. Utilization of multiprocessor architectures: the benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor. A single-threaded process can run on only one CPU, no matter how many are available. Multithreading on a multi-CPU machine increases concurrency. On a single-processor architecture, the CPU generally moves between threads so quickly as to create an illusion of parallelism, but in reality only one thread is running at a time.



Q35. Kernel-level threads: advantages and disadvantages.


In this method, the kernel knows about and manages the threads. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. The operating system kernel provides system calls to create and manage threads.

Advantages:
• Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
• Kernel-level threads are especially good for applications that frequently block.

Disadvantages:
• Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than those of user-level threads, since the kernel must manage and schedule threads as well as processes.
• The kernel requires a full thread control block (TCB) for each thread to maintain information about it. As a result, there is significant overhead and increased kernel complexity.



Q36 Explain critical section problem.


Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time.

The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (1);

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting: there exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
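A classic two-process software solution satisfying all three requirements is Peterson's algorithm. A C sketch of the textbook form (on modern hardware it additionally needs memory barriers to be correct):

    #include <stdbool.h>

    volatile bool flag[2] = {false, false};  /* flag[i]: Pi wants to enter */
    volatile int turn = 0;                   /* whose turn it is to yield  */

    void enter_section(int i) {              /* entry section for Pi, i in {0, 1} */
        int j = 1 - i;
        flag[i] = true;                      /* announce intent */
        turn = j;                            /* give the other process priority */
        while (flag[j] && turn == j)
            ;                                /* busy-wait while Pj wants in and has the turn */
    }

    void exit_section(int i) {
        flag[i] = false;                     /* exit section: let Pj proceed */
    }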



Q37 Binary semaphore.


The semaphore construct described in the previous sections is commonly known as a counting semaphore, since its integer value can range over an unrestricted domain. A binary semaphore is a semaphore with an integer value that can range only between 0 and 1. A binary semaphore can be simpler to implement than a counting semaphore, depending on the underlying hardware architecture. We now show how a counting semaphore can be implemented using binary semaphores.

Let S be a counting semaphore. To implement it in terms of binary semaphores, we need the following data structures:

    binary-semaphore S1, S2;
    int C;

Initially S1 = 1, S2 = 0, and the value of the integer C is set to the initial value of the counting semaphore S.

The wait operation on the counting semaphore S can be implemented as follows:

    wait(S1);
    C--;
    if (C < 0) {
        signal(S1);
        wait(S2);
    }
    signal(S1);

The signal operation on the counting semaphore S can be implemented as follows:

    wait(S1);
    C++;
    if (C <= 0)
        signal(S2);
    else
        signal(S1);



Q38 Necessary Conditions for Deadlock


Coffman (1971) identified four conditions that must hold simultaneously for there to be a deadlock.

1. Mutual exclusion condition: the resources involved are non-shareable.
Explanation: at least one resource must be held in a non-shareable mode, that is, only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

2. Hold and wait condition: a requesting process already holds resources while waiting for requested resources.
Explanation: there must exist a process that is holding a resource already allocated to it while waiting for additional resources that are currently held by other processes.

3. No-preemption condition: resources already allocated to a process cannot be preempted.
Explanation: resources cannot be removed from processes; they are released only after the process has used them to completion, or voluntarily by the process holding them.

4. Circular wait condition: the processes in the system form a circular list or chain where each process in the list is waiting for a resource held by the next process in the list. There exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn−1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.



Q39 Logical- Versus Physical-Address Space


An address generated by the CPU is commonly referred to as a logical address or a virtual address, whereas an address seen by the main-memory unit is commonly referred to as a physical address.
⇒ The set of all logical addresses generated by a program is a logical-address space, whereas the set of all physical addresses corresponding to these logical addresses is a physical-address space.
⇒ Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
⇒ The Memory Management Unit (MMU) is a hardware device that maps virtual addresses to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
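A toy C model of the relocation-register scheme described above (the register values are illustrative, not from the original text): the limit register validates the logical address, and then the relocation register is added:

    #include <stdio.h>

    int main(void) {
        int relocation = 14000, limit = 12000;   /* example register values */
        int logical[] = {0, 346, 12500};

        for (int i = 0; i < 3; i++) {
            if (logical[i] >= limit)             /* protection check first */
                printf("logical %5d -> trap: addressing error\n", logical[i]);
            else
                printf("logical %5d -> physical %d\n",
                       logical[i], relocation + logical[i]);
        }
        return 0;
    }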



Q40 Explain swapping.


A process can be swapped temporarily out of memory to a backing store (a large disk) and then brought back into memory for continued execution.
⇒ Roll out, roll in: a variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process so that it can load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is called roll out, roll in.
⇒ The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
⇒ Modified versions of swapping are found on many systems (UNIX, Linux, and Windows).



Q41 Explain virtual memory.


Virtual memory is a technique that allows the execution of processes that may not be completely in memory. Only part of the program needs to be in memory for execution. It means that Logical address space can be much larger than physical address space. Virtual memory allows processes to easily share files and address spaces, and it provides an efficient mechanism for process creation. Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available.



Q42 Explain thrashing


The system spends most of its time shuttling pages between main memory and secondary memory due to frequent page faults. This behavior is known as thrashing. A process is thrashing if it is spending more time paging than executing. Thrashing leads to low CPU utilization, and the operating system, observing the low utilization, may then mistakenly increase the degree of multiprogramming, which makes the problem worse.



Q43 Memory allocation and ways to allocate memory.


The main memory must accommodate both the operating system and the various user processes. We need to allocate different parts of the main memory in the most efficient way possible. The main memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We may place the operating system in either low memory or high memory. The major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well. There are two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Non-contiguous memory allocation



Q44 Explain dynamic loading and dynamic linking.


Dynamic loading:
⇒ Loads a program and its data dynamically into physical memory to obtain better memory-space utilization.
⇒ With dynamic loading, a routine is not loaded until it is called.
⇒ The advantage of dynamic loading is that an unused routine is never loaded.
⇒ This method is useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
⇒ Dynamic loading does not require special support from the operating system.

Dynamic linking:
⇒ Linking is postponed until execution time.
⇒ A small piece of code (a stub) is used to locate the appropriate memory-resident library routine.
⇒ The stub replaces itself with the address of the routine and executes the routine.
⇒ The operating system is needed to check whether the routine is in the process's memory address space.
⇒ Dynamic linking is particularly useful for libraries.



Q45 Binding of Instructions and Data to Memory


Address binding of instructions and data to memory addresses can happen at three different stages:
1. Compile time: the time taken to compile the program or source code. During compilation, if the memory location is known a priori, absolute code is generated.
2. Load time: the time taken to link all related program files and load them into main memory. Relocatable code must be generated if the memory location is not known at compile time.
3. Execution time: the time taken to execute the program in main memory. Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).



Q46. Recovery from deadlock.


When a detection algorithm determines that a deadlock exists, the system or the operator is responsible for handling the deadlock. There are two options for breaking a deadlock:
1. Process termination
2. Resource preemption

Process termination: there are two methods to eliminate deadlocks by terminating a process:
1. Abort all deadlocked processes: this method clearly breaks the deadlock cycle by terminating every deadlocked process, but at great expense, since it discards the partial computations completed by those processes.
2. Abort one process at a time until the deadlock cycle is eliminated: this method terminates one process at a time and invokes a deadlock-detection algorithm to determine whether any processes are still deadlocked.

Resource preemption: the operator or system preempts some resources from processes and gives these resources to other processes until the deadlock cycle is broken. If preemption is required to deal with deadlocks, then three issues need to be addressed:
1. Selecting a victim: the system or operator selects which resources and which processes are to be preempted, based on a cost factor.
2. Rollback: the system or operator must roll back the process to some safe state and restart it from that state.
3. Starvation: the system or operator should ensure that resources are not always preempted from the same process.



Q47 Explain monitor.


Another high-level synchronization construct is the monitor type. A monitor is characterized by a set of programmer-defined operations. The representation of a monitor type consists of declarations of the variables whose values define the state of an instance of the type, together with the bodies of the procedures or functions that implement operations on the type.

The representation of a monitor type cannot be used directly by the various processes. A procedure defined within a monitor can access only those variables declared locally within the monitor and its formal parameters. Similarly, the local variables of a monitor can be accessed by only the local procedures. The monitor construct ensures that only one process at a time can be active within the monitor. Consequently, the programmer does not need to code this synchronization constraint explicitly.

However, the monitor construct, as defined so far, is not sufficiently powerful for modeling some synchronization schemes. For this purpose, we need additional synchronization mechanisms, which are provided by the condition construct. A programmer who needs to write a tailor-made synchronization scheme can define one or more variables of type condition:

    condition x, y;

The only operations that can be invoked on a condition variable are wait and signal. The operation x.wait() means that the process invoking it is suspended until another process invokes x.signal(). The x.signal() operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect; that is, the state of x is as though the operation had never been executed. Contrast this with the signal operation associated with semaphores, which always affects the state of the semaphore.

Now suppose that, when the x.signal() operation is invoked by a process P, there is a suspended process Q associated with condition x. Clearly, if the suspended process Q is allowed to resume its execution, the signaling process P must wait; otherwise, both P and Q would be active simultaneously within the monitor. Note, however, that both processes can conceptually continue with their execution. Two possibilities exist:
1. P either waits until Q leaves the monitor, or waits for another condition.
2. Q either waits until P leaves the monitor, or waits for another condition.

There are reasonable arguments in favor of adopting either option. Since P was already executing in the monitor, choice 2 seems more reasonable. However, if process P is allowed to continue, the "logical" condition for which Q was waiting may no longer hold by the time Q is resumed. Choice 1 was advocated by Hoare, mainly because the preceding argument in favor of it translates directly into simpler and more elegant proof rules. A compromise between these two choices was adopted in the language Concurrent Pascal: when process P executes the signal operation, it immediately leaves the monitor, and Q is immediately resumed. This model is less powerful than Hoare's, because a process cannot signal more than once during a single procedure call.
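C has no built-in monitors, but the construct is conventionally emulated with one mutex (the monitor lock) plus condition variables. A sketch for a simple counting resource (illustrative; pthread_cond_wait releases the monitor lock while suspended and reacquires it before returning, like x.wait()):

    #include <pthread.h>

    static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
    static int count = 0;

    void put(void) {
        pthread_mutex_lock(&monitor);     /* enter the monitor */
        count++;
        pthread_cond_signal(&nonzero);    /* x.signal(): wake one waiter */
        pthread_mutex_unlock(&monitor);   /* leave the monitor */
    }

    void get(void) {
        pthread_mutex_lock(&monitor);
        while (count == 0)                /* signal-and-continue: re-test on wakeup */
            pthread_cond_wait(&nonzero, &monitor);  /* x.wait() */
        count--;
        pthread_mutex_unlock(&monitor);
    }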



Q48 Explain concurrent processes.


The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system; clearly, any process that does not share any data (temporary or persistent) with any other process is independent. On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system; clearly, any process that shares data with other processes is a cooperating process.

We may want to provide an environment that allows process cooperation for several reasons:
• Information sharing: since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment that allows concurrent access to these types of resources.
• Computation speedup: if we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
• Modularity: we may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
• Convenience: even an individual user may have many tasks on which to work at one time.



Q49 Process and process state


Process concept:
• Informally, a process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. In addition, a process generally includes the process stack, which contains temporary data (such as method parameters, return addresses, and local variables), and a data section, which contains global variables.
• An operating system executes a variety of programs: batch systems run jobs; time-shared systems run user programs or tasks.
• A process is a program in execution; process execution must progress in sequential fashion.
• A process includes: the program counter, stack, and data section.

Process state: as a process executes, it changes state.
• New: the process is being created.
• Running: a process is running if it has the CPU, that is, it is actually using the CPU at that particular instant.
• Blocked (waiting): a process is blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.
• Ready: a process is ready if it needs the CPU to execute. A ready process is runnable but temporarily stopped to let another process run.
• Terminated: the process has finished execution.



Q50 Virtual machine.


• A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating system kernel as though they were all hardware.
• A virtual machine provides an interface identical to the underlying bare hardware.
• The operating system creates the illusion of multiple processes, each executing on its own processor with its own (virtual) memory.
• The resources of the physical computer are shared to create the virtual machines:
✦ CPU scheduling can create the appearance that users have their own processor.
✦ Spooling and a file system can provide virtual card readers and virtual line printers.
✦ A normal user time-sharing terminal serves as the virtual machine operator's console.


