Basic CS-Operating System [TCS Placement]: Sample Questions 14 - 15 of 35


Question 14

Operating System

Describe in Detail


Explain Belady's Anomaly

Explanation

  • Belady's anomaly is also called the FIFO anomaly.
  • Usually, increasing the number of frames allocated to a process's virtual memory makes the process execute faster, because fewer page faults occur.
  • Sometimes the reverse happens: execution time increases even when more frames are allocated to the process. This is Belady's anomaly.
  • The anomaly appears only for certain page reference patterns.
  • Belady's anomaly occurs with the FIFO page replacement policy in the OS.
  • With FIFO, for some reference strings the number of page faults actually increases as the number of frames increases.
  • For the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, the number of page faults with four frames (10) is greater than with three frames (9).
  • This unexpected result is known as Belady's anomaly.
  • In general, for some page replacement algorithms, page faults may increase as the number of allocated frames increases.
[Image: Belady's anomaly illustrated with an example]
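The anomaly described above can be checked with a short simulation (a minimal sketch; the `fifo_page_faults` helper and the use of the classic reference string are illustrative assumptions, not part of the original text):

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement with the given frame count."""
    frames = deque()  # oldest resident page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the page that arrived first
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults
print(fifo_page_faults(refs, 4))  # 10 faults: more frames, yet more faults
```

Running this shows 9 faults with three frames but 10 with four, which is exactly the counter-intuitive behaviour the anomaly names.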

Question 15

Operating System

Describe in Detail


Explain the popular multiprocessor thread-scheduling strategies

Explanation

  • Load Sharing
    • Processes are not assigned to a particular processor. A global queue of threads is maintained.
    • Each processor, when idle, selects a thread from this queue.
    • In contrast, load balancing refers to a scheme where work is allocated to processors on a more permanent basis.
    • Load balancing approaches attempt to equalize the workload on all the nodes of a system by gathering state information.
    • Load sharing algorithms do not attempt to balance the average workload on all nodes; they only ensure that no node sits idle while another is heavily loaded.
    • Load sharing uses the same classes of policies as load balancing (load estimation, process transfer, location, and state information exchange policies), but the two approaches typically differ in the location policy.
  • Gang Scheduling
    • A set of related threads is scheduled to run on a set of processors at the same time, on a 1-to-1 basis.
    • Closely related threads/processes may be scheduled this way to reduce synchronization blocking, and minimize process switching.
    • Group scheduling predated this strategy.
    • Simultaneous scheduling of threads that make up a single process
    • Useful for applications where performance severely degrades when any part of the application is not running
    • Threads often need to synchronize with each other
  • Dedicated processor assignment
    • Provides implicit scheduling defined by assignment of threads to processors.
    • For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program.
    • Processors are chosen from the available pool.
    • When an application is scheduled, each of its threads is assigned to its own processor
    • Some processors may be idle
    • No multiprogramming of processors
  • Dynamic scheduling
    • The number of threads in a program can be altered during the course of execution.
    • The operating system adjusts the load to improve utilization:
      • Assign idle processors
      • New arrivals may be assigned to a processor that is used by a job currently using more than one processor
      • Hold request until processor is available
      • Assign a processor to a job in the list that currently has no processors (i.e., to any waiting new arrivals)
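The load-sharing strategy above can be sketched in a few lines (an illustrative model, not a real OS scheduler: the global ready queue is `queue.Queue`, and each worker thread stands in for an idle processor pulling the next runnable task):

```python
import queue
import threading

# Global ready queue shared by all "processors" (load sharing):
# no task is bound to a particular processor in advance.
ready_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def processor(cpu_id):
    """Each idle processor repeatedly pulls the next thread from the queue."""
    while True:
        task = ready_queue.get()
        if task is None:            # sentinel: shut this processor down
            break
        with results_lock:
            results.append((cpu_id, task))
        ready_queue.task_done()

# Enqueue eight runnable "threads".
for task in range(8):
    ready_queue.put(task)

cpus = [threading.Thread(target=processor, args=(i,)) for i in range(3)]
for c in cpus:
    c.start()
ready_queue.join()                  # wait until every task has been consumed
for _ in cpus:
    ready_queue.put(None)           # one sentinel per processor
for c in cpus:
    c.join()

print(sorted(t for _, t in results))  # every task ran exactly once
```

Because any idle worker takes the next task, no processor stays idle while work waits, which is the defining property of load sharing; which processor runs which task is not fixed in advance.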
