CSE 522S: Studio 8

Completely Fair Scheduler (CFS)


Hobbits give presents to other people on their own birthdays. Not very expensive ones, as a rule, but it was not a bad system. Actually in Hobbiton and Bywater every day in the year it was somebody's birthday, so that every hobbit in those parts had a fair chance of at least one present at least once a week.

The Fellowship of the Ring, Book 1, Chapter 1

Linux's CFS strives to be completely fair, meaning that all processes should receive an equal share of the processor time. In particular, if there are a total of N runnable processes in the system, then each process will ideally receive exactly 1/N of the processor time. In this studio we will explore the CFS from user space.

In this studio, you will:

  1. Create workloads to influence the scheduler
  2. Analyze the scheduler from userspace

Please complete the required exercises below, as well as any optional enrichment exercises that you wish to complete.

As you work through these exercises, please record your answers, and when finished email your results to eng-cse522@email.wustl.edu with the phrase CFS Studio in the subject line.

Make sure that the name of each person who worked on these exercises is listed in the first answer, and make sure you number each of your responses so it is easy to match your responses with each exercise.


Required Exercises

  1. As the answer to the first exercise, list the names of the people who worked together on this studio.

  2. Before we begin, we need to disable a certain scheduling heuristic that would influence the exercises in this studio. Change the value of the file /proc/sys/kernel/sched_autogroup_enabled from 1 to 0. You can do this with the command echo 0 > /proc/sys/kernel/sched_autogroup_enabled, but you will probably need a root terminal (sudo bash) in order to modify the value. If we did not do this, then the way that you invoked the exercises in this studio would influence your results.

  3. Now we will create a CPU-bound workload. The simplest CPU-bound program is a while(1) loop, but we want a little control over where this task executes. To do this, we will use Linux's processor affinities.

    For this exercise, write an infinite-loop program that takes one integer argument giving the processor that the program should execute upon. In order to control where your process is allowed to run, you can use the function sched_setaffinity(). To use this function you will need to specify the set of allowable CPUs with a variable of type cpu_set_t. In order to manipulate this data type you should use the macros documented in man CPU_SET. This is a nonstandard extension, so you will need to place #define _GNU_SOURCE before you include sched.h.

    Verify that your program runs continuously on a processor of your choosing with the top or trace-cmd commands.
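    One possible shape for this program is sketched below. This is not the only correct solution; the helper name pin_to_cpu() is our own choice, and error handling can be done however you prefer.

    ```c
    /* Sketch of exercise 3: pin this process to the CPU given as argv[1],
     * then spin forever. Build with: gcc -o while while.c */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Restrict the calling process to a single CPU; returns 0 on success. */
    int pin_to_cpu(int cpu)
    {
        cpu_set_t set;

        CPU_ZERO(&set);     /* start with an empty CPU set */
        CPU_SET(cpu, &set); /* allow only the requested CPU */

        /* pid 0 means "the calling process" */
        return sched_setaffinity(0, sizeof(set), &set);
    }

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <cpu>\n", argv[0]);
            return 1;
        }

        if (pin_to_cpu(atoi(argv[1])) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        while (1)
            ; /* CPU-bound: spin until killed */

        return 0;
    }
    ```

    With top, pressing 1 shows per-CPU utilization, so you can confirm that the chosen processor sits near 100% while the others stay idle.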

  4. Your Raspberry Pi has four processors, and the Linux convention is to number them starting with zero. Create an infinite loop task on each of processors 0, 1, 2, and 3 to fully occupy your system. We will call these tasks the background workload.

    Take a moment and marvel at the fact that your system hasn't come crashing to a halt. Do a few tasks of your own choosing (text editing, internet browsing, etc.) and make a subjective judgement of how well the system responds now versus before you started the infinite loop tasks.

    As the answer to this question, describe your experience.

  5. Now we'll use the dense_mm program from previous studios to examine the behavior of CPU-bound tasks on a heavily loaded system. First, kill your background workload and use the command time ./dense_mm 300 to get a rough measure of program execution time on a quiet system. Now, restart the background tasks and run the same command again.

    As the answer to this exercise, copy and paste the timing output of each run.

  6. Compare the real and user timings. What does the previous exercise tell you about the way that two CPU-bound tasks share a processor under the CFS?

  7. What do you think would happen to the real and user times of dense_mm 300 if you were to increase the number of background tasks?

  8. Create new background tasks so that each processor contains two infinite loop tasks. Execute the command time ./dense_mm 300 and copy and paste the results here. What happened?

    Two helpful tips: you can detach a command from the terminal by executing it with an ampersand (&) at the end, and you can kill all tasks matching a certain string with the pkill command. For example, if I named my infinite loop task while, then I could create multiple tasks on processor zero with the command ./while 0 &. When I was done, I could kill all outstanding tasks with the command pkill while.

  9. Reset your system so that only one background task exists on each processor. The CFS scheduler doesn't have a direct notion of timeslice or priority, but Linux's nice priorities do influence the proportion of time a task receives. Run the command time sudo nice -n -20 ./dense_mm 300. What proportion of time (user divided by real) did the task receive?

  10. Repeat the previous exercise for nice priorities -10, -5, 0, 5, 10, and 19. You probably don't want to wait for 19 to finish, so stop the task after a while with CTRL-C. Compute their runtime proportions.

  11. Plot the values from the last exercise on a graph. What does the function look like?
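    If you would like a prediction to compare your plot against, one can be derived from CFS's nice-to-weight table (sched_prio_to_weight in the kernel source), assuming dense_mm competes with exactly one nice-0 background task on its CPU. A rough sketch, using a handful of weights copied from that table:

    ```c
    /* Sketch: predicted CPU share for a task at a given nice value when it
     * shares one CPU with a single nice-0 task (weight 1024). Each nice
     * step changes the weight by roughly a factor of 1.25. */
    #include <stdio.h>

    /* Weights for selected nice values, from the kernel's
     * sched_prio_to_weight table. */
    static int weight_of(int nice)
    {
        switch (nice) {
        case -20: return 88761;
        case -10: return 9548;
        case  -5: return 3121;
        case   0: return 1024;
        case   5: return 335;
        case  10: return 110;
        case  19: return 15;
        default:  return -1; /* not tabulated in this sketch */
        }
    }

    /* Expected share against one nice-0 competitor: w / (w + 1024). */
    static double expected_share(int nice)
    {
        double w = weight_of(nice);
        return w / (w + 1024.0);
    }

    int main(void)
    {
        int nices[] = { -20, -10, -5, 0, 5, 10, 19 };

        for (int i = 0; i < 7; i++)
            printf("nice %3d -> expected user/real ~ %.3f\n",
                   nices[i], expected_share(nices[i]));
        return 0;
    }
    ```

    Your measured proportions will not match these numbers exactly (other system tasks also compete for time), but the overall shape of the curve should be similar.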

Things to turn in