Multi-Level Feedback Scheduling in Operating Systems

Multi-Level Feedback Scheduling (MLFS) is a CPU scheduling algorithm that combines the concepts of priority scheduling and round-robin scheduling. MLFS separates processes into different priority levels and allocates a certain amount of CPU time to each level. The algorithm is called "multi-level" because it consists of multiple priority levels, each with its own queue and its own time quantum, and "feedback" because a process can move between levels depending on how it uses the CPU.

The idea behind MLFS is to run high-priority processes first while lower-priority processes wait in their queues. When the higher-priority processes have finished, the scheduler selects the next process from the lower priority level. Lower-priority processes still get the CPU, but with a smaller time quantum than processes at a higher priority level.

The MLFS algorithm relies on two related mechanisms for adjusting priorities: dynamic priority adjustment and aging.

With dynamic priority adjustment, the priority of a process is updated based on its behavior. For example, a process that has been waiting for a long time can be given a higher priority, while a process that consumes a lot of CPU time can be given a lower priority.

Aging is the specific rule by which the priority of a process increases the longer it waits in the queue. The aging mechanism ensures that processes that have been waiting for a long time are eventually executed, even if they started with a low priority.
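
A minimal sketch of an aging pass in Python is shown below. The attribute names (level, waited) and the 50-unit waiting threshold are illustrative assumptions made for this sketch, not part of any particular operating system's API.

from types import SimpleNamespace

def apply_aging(ready_queue, threshold=50):
    """Promote any process that has waited at least `threshold` time units.

    Each entry is assumed to carry a `level` attribute (0 = highest priority)
    and a `waited` counter; the 50-unit threshold is an illustrative choice.
    """
    for p in ready_queue:
        if p.waited >= threshold and p.level > 0:
            p.level -= 1    # move the process up one priority level
            p.waited = 0    # restart its wait counter after the promotion

# Example: a process stuck at level 2 for 60 units is promoted to level 1.
queue = [SimpleNamespace(name="X", level=2, waited=60)]
apply_aging(queue)
print(queue[0].level)       # -> 1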

Implementation of Multi-Level Feedback Scheduling:

The MLFS algorithm is implemented by dividing the ready queue into different priority levels, with each priority level assigned a different time quantum. When a process arrives in the ready queue, it is assigned the highest priority level and is assigned the largest time quantum.

If a process uses its entire time quantum without finishing, it is moved down to a lower priority level, where it is given a smaller time quantum, and the CPU is handed to the highest-priority process that is ready to run.

When every process in a priority level has either finished or used up its time quantum, the scheduler moves on to the next lower priority level and executes the processes waiting there. This continues until all processes have completed execution.
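
As a rough illustration, the queue structure described above can be sketched in Python as follows. The three levels and their quanta mirror the description; the class and function names (Process, mlfs_schedule) are placeholders for this sketch rather than a standard API.

from collections import deque

QUANTA = [10, 8, 6]              # time quantum per priority level (index 0 = level 1, the highest)

class Process:
    def __init__(self, name, burst, level=0):
        self.name = name
        self.burst = burst       # remaining CPU burst time
        self.level = level       # starting priority level (0-based); new processes default to the top

def mlfs_schedule(processes):
    queues = [deque() for _ in QUANTA]
    for p in processes:
        queues[p.level].append(p)
    clock = 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty level
        p = queues[level].popleft()
        run = min(QUANTA[level], p.burst)                    # run for at most one quantum
        clock += run
        p.burst -= run
        print(f"t={clock:3d}: ran {p.name} for {run} units at level {level + 1}")
        if p.burst > 0:
            # Quantum exhausted but work remains: demote the process one level
            # (or keep it at the lowest level if it is already there).
            queues[min(level + 1, len(QUANTA) - 1)].append(p)
    return clock

Under this sketch, a process is demoted only when it exhausts a full quantum; a process that finishes before its quantum runs out simply leaves the system.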

Example:

Consider a system with four processes (A, B, C, and D) and three priority levels (1, 2, and 3). In this example each process arrives with the initial priority shown in the table below, and the time quantum for each priority level is as follows:

  • Priority level 1: 10 units of time
  • Priority level 2: 8 units of time
  • Priority level 3: 6 units of time

When process A arrives in the ready queue, it is assigned the highest priority level (1) and is assigned a time quantum of 10 units of time.


Process   Burst Time   Priority   Time Quantum
   A          20           1           10
   B          15           2            8
   C          25           3            6
   D          10           1           10

Process A uses the CPU for 10 units of time, and its burst time is reduced to 10 units. Its priority level is then reduced to 2, and its time quantum is reduced to 8 units of time.

Process B arrives in the ready queue and is assigned priority level 2 and a time quantum of 8 units of time.

Process B uses the CPU for 8 units of time, and its burst time is reduced to 7 units. Its priority level is then reduced to 3, and its time quantum is reduced to 6 units of time.

Process C arrives in the ready queue and is assigned priority level 3 and a time quantum of 6 units of time.

Process C uses the CPU for 6 units of time, and its burst time is reduced to 19 units. Its priority level remains at 3, and its time quantum remains at 6 units of time.

Process D arrives in the ready queue and is assigned priority level 1 and a time quantum of 10 units of time.

Process D uses the CPU for 10 units of time, and its burst time is reduced to 0 units. As process D has completed its execution, it is removed from the ready queue.

Process A is now assigned a time quantum of 8 units of time, and uses the CPU for 8 units of time. Its burst time is reduced to 2 units and, having exhausted its quantum, it is moved down to priority level 3.

Process B is now assigned a time quantum of 6 units of time, and uses the CPU for 6 units of time. Its burst time is reduced to 1 unit, and its priority level remains at 3.

Process C is now assigned a time quantum of 6 units of time, and uses the CPU for 6 units of time. Its burst time is reduced to 13 units, and its priority level remains at 3.

This process continues until all processes have completed execution.
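
To see these rules in action, the scheduler sketch from the implementation section can be run on the four example processes, seeded with the initial priorities from the table. The exact interleaving may differ from the hand-worked trace above, because the walkthrough does not fully specify arrival times or tie-breaking, but the quantum and demotion rules are the same.

jobs = [
    Process("A", 20, level=0),   # priority level 1
    Process("B", 15, level=1),   # priority level 2
    Process("C", 25, level=2),   # priority level 3
    Process("D", 10, level=0),   # priority level 1
]
finish_time = mlfs_schedule(jobs)
print(f"all processes finished at t={finish_time}")

Because this sketch assumes all four processes are ready from the start, the CPU is never idle and the last process finishes at t = 70, the sum of the four burst times.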

Advantages of Multi-Level Feedback Scheduling:

  1. High-priority processes are executed first: MLFS gives priority to processes with high priority, ensuring that important tasks are executed first. This helps to improve overall system performance and responsiveness.
  2. Better CPU utilization: MLFS keeps lower-priority processes queued and ready to run, ensuring that the CPU is not left idle for long periods. This results in better utilization of the CPU and improved system performance.
  3. Fairness: MLFS ensures fairness by eventually running lower-priority processes, even if they have been waiting for a long time. This prevents low-priority processes from being starved of CPU time.
  4. Dynamic adjustment: MLFS allows the priority of a process to be adjusted dynamically based on its behavior, so the system adapts to changing workloads.

Disadvantages of Multi-Level Feedback Scheduling:

  1. Overhead: MLFS requires additional bookkeeping to maintain multiple priority levels and time quanta, which increases system overhead.

  2. Complexity: MLFS can be complex to implement, as it involves multiple priority levels, time quanta, and rules for moving processes between levels, which can make the scheduling algorithm difficult to understand and tune.

  3. Inefficiency: MLFS can be inefficient for systems with a very large number of processes, as the cost of maintaining multiple priority levels and time quanta can reduce overall performance.

In conclusion, Multi-Level Feedback Scheduling is a useful CPU scheduling algorithm that combines the concepts of priority scheduling and round-robin scheduling. While it has some disadvantages, it offers several advantages, such as improved CPU utilization, fairness, and dynamic priority adjustment. However, the overhead, complexity, and efficiency trade-offs should be considered carefully before using it in a system.

