TCS Papers: Sample Questions 347 - 347 of 502


Question number: 347

» Basic CS » Operating System

Essay Question

Describe in Detail

Explain the popular multiprocessor thread-scheduling strategies

Explanation

  • Load Sharing

    • Processes are not assigned to a particular processor. A global queue of threads is maintained.

    • Each processor, when idle, selects a thread from this queue.

    • Note that load balancing, in contrast, refers to a scheme where work is allocated to processors on a more permanent basis.

    • Load balancing approaches attempt to equalize the workload on all the nodes of a system by gathering and exchanging state information.

    • Load sharing algorithms do not attempt to balance the average workload on all nodes; they only ensure that no node is idle or heavily loaded.

    • The policies for the load-sharing approach are the same as the load-balancing policies: load estimation policy, process transfer policy, location policy, and state information exchange policy; the two approaches differ mainly in the location policy. A minimal sketch of a shared global queue is given just below.
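
A minimal Python sketch (not from the original answer) of the load-sharing idea: one global ready queue shared by every processor, with each idle processor pulling the next thread from it. The worker count and task names are invented for illustration.

```python
import queue
import threading

# Illustrative only: worker threads stand in for processors, and callables
# stand in for ready threads. No task is bound to a particular processor;
# whichever "processor" goes idle first takes the next task from the queue.
global_ready_queue = queue.Queue()

def processor(cpu_id):
    while True:
        task = global_ready_queue.get()   # idle processor selects the next thread
        if task is None:                  # sentinel: no more work
            break
        task(cpu_id)
        global_ready_queue.task_done()

def make_task(name):
    def task(cpu_id):
        print(f"task {name} ran on processor {cpu_id}")
    return task

if __name__ == "__main__":
    workers = [threading.Thread(target=processor, args=(i,)) for i in range(4)]
    for w in workers:
        w.start()
    for n in range(8):
        global_ready_queue.put(make_task(n))
    global_ready_queue.join()             # wait until every task has run
    for _ in workers:
        global_ready_queue.put(None)      # stop the processors
    for w in workers:
        w.join()
```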

  • Gang Scheduling

    • A set of related threads is scheduled to run on a set of processors at the same time, on a 1-to-1 basis.

    • Closely related threads/processes may be scheduled this way to reduce synchronization blocking and minimize process switching.

    • Group scheduling predated this strategy.

    • Simultaneous scheduling of threads that make up a single process

    • Useful for applications where performance severely degrades when any part of the application is not running

    • Threads often need to synchronize with each other; a sketch of one gang-scheduling round is given just below.
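
As a rough illustration of gang scheduling, the following sketch places all threads of one gang on processors one-to-one in each time slice, rotating round-robin over gangs; the processor count and gang sizes are assumptions made for the example.

```python
# Illustrative only: each time slice is given to exactly one gang, and that
# gang's threads are mapped to processors one-to-one so they run together.
PROCESSORS = 4

gangs = {
    "A": ["A0", "A1", "A2", "A3"],   # four related threads fill all processors
    "B": ["B0", "B1"],               # two related threads; two processors stay idle
}

def gang_schedule(gangs, time_slices):
    names = list(gangs)
    for t in range(time_slices):
        gang = names[t % len(names)]               # pick the next gang round-robin
        threads = gangs[gang]
        assignment = {cpu: (threads[cpu] if cpu < len(threads) else "idle")
                      for cpu in range(PROCESSORS)}
        print(f"slice {t}: gang {gang} -> {assignment}")

gang_schedule(gangs, time_slices=4)
```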

  • Dedicated processor assignment

    • Provides implicit scheduling defined by the assignment of threads to processors.

    • For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program.

    • Processors are chosen from the available pool.

    • When an application is scheduled, each of its threads is assigned to its own processor.

    • Some processors may be idle

    • No multiprogramming of processors; a sketch of this allocation scheme is given just below.
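
The sketch below, with an invented processor pool and application sizes, illustrates dedicated processor assignment: when an application is admitted it receives one processor per thread for its whole run, and it waits if the free pool cannot cover all of its threads.

```python
# Illustrative only: processors are dedicated to an application for its whole
# lifetime; there is no multiprogramming of a processor between applications.
free_processors = set(range(8))   # pool of 8 CPUs
assignments = {}                  # application -> set of dedicated CPUs

def admit(app, thread_count):
    if thread_count > len(free_processors):
        print(f"{app}: not enough free processors, must wait")
        return False
    cpus = {free_processors.pop() for _ in range(thread_count)}
    assignments[app] = cpus       # one CPU per thread until the program exits
    print(f"{app}: dedicated CPUs {sorted(cpus)}")
    return True

def terminate(app):
    free_processors.update(assignments.pop(app))   # return CPUs to the pool

admit("editor", 3)
admit("renderer", 6)              # waits: only 5 CPUs remain free
terminate("editor")
admit("renderer", 6)              # now succeeds
```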

  • Dynamic scheduling

    • The number of threads in a program can be altered during the course of execution.

    • The operating system adjusts the load to improve utilization:

      • Assign idle processors

      • New arrivals may be assigned to a processor that is used by a job currently using more than one processor

      • Hold request until processor is available

      • Assign a freed processor to a job in the list that currently has no processors (i.e., to waiting new arrivals). A sketch of these rules is given just below.
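
The sketch below walks through the four allocation rules just listed (use an idle processor first, otherwise take one from a job holding more than one, otherwise hold the request, and give a released processor to a waiting job with none); the job names and processor count are invented for illustration.

```python
from collections import deque

# Illustrative only: the number of processors a job holds is adjusted
# dynamically as requests arrive and processors are released.
TOTAL_PROCESSORS = 4
allocation = {}        # job -> number of processors it currently holds
waiting = deque()      # jobs waiting for their first processor

def idle_count():
    return TOTAL_PROCESSORS - sum(allocation.values())

def request(job):
    if idle_count() > 0:                                   # 1. assign an idle processor
        allocation[job] = allocation.get(job, 0) + 1
    else:
        donor = next((j for j, n in allocation.items() if n > 1), None)
        if donor is not None and job not in allocation:    # 2. new arrival takes one from
            allocation[donor] -= 1                         #    a job using more than one
            allocation[job] = 1
        else:
            waiting.append(job)                            # 3. hold the request
    print(f"after request({job}): {allocation}, waiting={list(waiting)}")

def release(job):
    allocation[job] -= 1
    if allocation[job] == 0:
        del allocation[job]
    if waiting:
        request(waiting.popleft())                         # 4. give the freed processor to
                                                           #    a job with no processors

for j in ["A", "A", "B", "C", "D", "E"]:
    request(j)
release("A")
```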
