This assignment is due before 11:00PM on Wednesday, September 20, 2017. There are two parts to the homework:
Operating systems like iOS, Unix, and Windows use a scheduler to determine which apps should be running on the CPU. A scheduler must decide what order to run processes in, using a policy that is based on information like arrival time, a priority value, or the amount of time that a process has already used.
The goals of this part of the assignment are:
A deque (pronounced deck) is a double-ended queue. It combines the functionality of queues and stacks. Queues typically only support adding to the back (enqueueing) and deleting from the front (dequeueing). Stacks support adding to the front (pushing) and deleting from the front (popping). A deque allows you to add and remove items to either the front or the back.
You should implement the following API:
This API is a simplification of Java’s
Deque interface. Note that your implementation should support adding, removing, and peeking at null elements.
You will implement your deque using array resizing, as discussed in class. Inside
Deque.java, you should have an array for storing the elements of your deque. The array should start with a capacity of 2 elements, and it should double in size (resize up) whenever the addition of a new element would cause the deque to exceed the capacity of the array. Whenever the removal of an element would cause the number of elements in the deque to fall to less than one quarter of the size of the array, you should cut the size of the array in half (resize down).
You should write an efficient implementation of your resizing array. For instance, the add and delete methods should not need to copy the entire array on every call. Most operations should work in place, meaning that no additional space is needed.
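The doubling/halving policy above can be sketched as follows. This is written as a simple stack for brevity; the class and method names here are illustrative, not the required Deque API.

```java
// A minimal sketch of the doubling/halving resize policy (stack-shaped
// for brevity; names are illustrative, not the required Deque API).
public class ResizeSketch<Item> {
    private Item[] a;
    private int n = 0; // number of elements currently stored

    @SuppressWarnings("unchecked")
    public ResizeSketch() {
        a = (Item[]) new Object[2]; // start with a capacity of 2
    }

    public void add(Item item) {
        if (n == a.length) resize(2 * a.length); // full: double before inserting
        a[n++] = item;
    }

    public Item remove() {
        Item item = a[--n];
        a[n] = null; // avoid loitering
        if (n > 0 && n < a.length / 4) resize(a.length / 2); // under a quarter full: halve
        return item;
    }

    @SuppressWarnings("unchecked")
    private void resize(int capacity) {
        Item[] copy = (Item[]) new Object[capacity];
        for (int i = 0; i < n; i++) copy[i] = a[i];
        a = copy;
    }
}
```

Note that resizing copies the array only when the size crosses a power-of-two boundary, so the copying cost is amortized across many cheap in-place operations.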
For examples of how to implement a resizing array, check out the textbook code for the
ResizingArrayStack or the
ResizingArrayQueue. Remember that you are NOT ALLOWED to copy the code exactly (we will check) but it should help you get started.
Your implementation should use wraparound. After a series of pushes and pops, you may have a case where the head of your deque is not at array index 0, but maybe at an index farther along the array (like array index 4). After some more pushes, the index of the tail of your deque will get to the end of the array. The next push should then be at index 0, then index 1, etc.
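The wraparound index arithmetic can be captured by two small helpers; the names prev/next are illustrative, but the modular arithmetic is the standard way to step around the ends of a circular buffer of capacity cap.

```java
// Illustrative wraparound arithmetic for a circular buffer of capacity cap.
// Moving past either end of the array wraps around to the other end.
public final class Wrap {
    // index of the slot just before i (useful for addFirst)
    static int prev(int i, int cap) { return (i - 1 + cap) % cap; }

    // index of the slot just after i (useful for addLast)
    static int next(int i, int cap) { return (i + 1) % cap; }
}
```

The `+ cap` in prev guards against Java's `%` returning a negative result when `i - 1` is negative.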
This homework asks you to implement an
Iterator. To learn more about iterators, you should read the textbook pages 138–9, and the Javadocs for the
Iterator interface. You do not need to implement
remove() for your iterators. In Java 8
remove() is a default method that throws an
UnsupportedOperationException. Your iterator should be a private inner class in your
Deque class. Your
iterator() method should return an instance of the private inner class.
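The inner-class pattern can be sketched as below. The fixed-capacity array and the field names are simplifications for illustration only; your real Deque will resize and iterate with wraparound.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch of iterator() returning a private inner class. The fixed-size
// array and field names are simplifications, not the required API.
public class IterDemo<Item> implements Iterable<Item> {
    @SuppressWarnings("unchecked")
    private Item[] a = (Item[]) new Object[8];
    private int n = 0;

    public void add(Item x) { a[n++] = x; } // no resizing in this sketch

    public Iterator<Item> iterator() { return new DequeIterator(); }

    // A (non-static) inner class can read the outer class's fields directly.
    private class DequeIterator implements Iterator<Item> {
        private int i = 0;
        public boolean hasNext() { return i < n; }
        public Item next() {
            if (!hasNext()) throw new NoSuchElementException();
            return a[i++];
        }
        // remove() is inherited as a Java 8 default method that throws
        // UnsupportedOperationException, so we do not implement it.
    }
}
```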
Deque<Item> is generic, so your implementation must also be generic; therefore the underlying array of elements must be of type
Item. Unfortunately, Java doesn’t cleanly support generic arrays, so the code to initialize a generic array of type
Item of size 2 in Java is
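The standard workaround (used by the textbook's resizing-array code) is to allocate an Object array and cast it. The cast produces an unchecked-cast warning, which is expected here:

```java
// Allocating a generic array in Java: create an Object[] and cast it.
// The compiler emits an unchecked-cast warning, which is expected.
public class GenericArrayDemo<Item> {
    @SuppressWarnings("unchecked")
    private Item[] a = (Item[]) new Object[2]; // generic array of size 2

    public void set(int i, Item x) { a[i] = x; }
    public Item get(int i) { return a[i]; }
}
```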
When you implement any method that removes an element from the array (such as
removeLast()), you should assign null to the associated entry in the array. If you do not set the removed entry to null, a reference to that element will loiter in the array. This means that Java's garbage-collection mechanism won't be able to clean up that memory, since a reference to the object still exists. Loitering references prevent the reclamation of the memory they use.
Your code should throw a
NoSuchElementException when the deque is empty and the user tries to remove or peek at a non-existent item. The Iterator returned by your iterator() method should throw an UnsupportedOperationException if remove() is called, and a NoSuchElementException if next() is called when no elements remain to be iterated.
Next, you will implement a scheduler using your
Deque. Our scheduler will be given a series of processes. When a process is submitted, its request time is the number of clock cycles that have already been simulated. This models what happens when a new app is launched or a background job is started, and the operating system needs to determine which jobs should be given time on the CPU.
Processes are passed to your scheduler as instances of a Process class. Priority values range from 0 to Integer.MAX_VALUE. A process with a low priority value is more urgent; for example, a process with priority 0 is more urgent than all other processes whose priority is greater than 0.
You are not required to submit your
Process class, but you should create one for testing purposes.
Our scheduler can only handle one process running on the CPU at a time, and does not allow multiple processes to be run concurrently. Because of this, we will represent the history of which processes were running at a given time as an integer array: The indices of the array represent the current clock cycle, and the values represent the PID of the process running at that time. If a process takes n clock cycles to run, it should take up n indices somewhere within this history. The locations a given process appears at in this history depend on the algorithm we use.
If no process is running at a given time, you should enter the value −1 in the timeline at the corresponding location. You will create this array in
getProcessTimeline(). The last index of the array should correspond to the last clock cycle at which a process was running.
The scheduler receives processes through the submit() method, which takes in a Process and considers the time of submission to be the number of cycles that the scheduler has already simulated.
Your scheduler class should have a constructor which takes in a
SchedulingAlgorithm enum, and uses it to decide which algorithm to use. You will implement 3 scheduling algorithms:
You should implement a
Scheduler class with the following API.
SchedulingAlgorithm is an enumeration with this definition:
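The enum's exact definition comes with the assignment; as a placeholder, it presumably looks something like the following (the constant names here are a guess, so use the definition from the handout):

```java
// Hypothetical sketch: the actual constant names come from the assignment.
public enum SchedulingAlgorithm {
    FIFO, ROUND_ROBIN, SHORTEST_JOB_FIRST, MULTILEVEL_QUEUE
}
```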
You should keep this in a separate file. Please do not include it in your Scheduler.java file.
This simple algorithm runs processes to completion in the order they were received, regardless of how long any individual process takes. When a process arrives, it is sent to the tail of the queue of processes waiting to be run. Consider two processes that were submitted to the scheduler:
A call to
getProcessTimeline() will produce the array shown below:
For each of the algorithms you will implement, your Scheduler should act as though it receives a process at its given request time; in no case should a process be executed before its request time. Note above how p1 is not executed until it arrives at time t = 2.
Additionally, observe how p2 waits for p1 to complete before it begins executing. When no process is running or waiting to be run, the array should contain the value −1 at that index. You should use your
Deque to represent the queue of processes waiting to be run in this and future algorithms.
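The FIFO behavior can be sketched as a small simulation. This uses java.util.ArrayDeque as a stand-in for your own Deque, and represents each process as a {pid, requestTime, runTime} triple; both are simplifications, not the required API.

```java
import java.util.ArrayDeque;

// Sketch of FIFO timeline construction. ArrayDeque stands in for your
// Deque; each process is a {pid, requestTime, runTime} triple, sorted by
// requestTime. history[t] holds the running PID at cycle t, or -1.
public class FifoSketch {
    public static int[] timeline(int[][] procs, int totalCycles) {
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        int[] history = new int[totalCycles];
        int next = 0; // index of the next process to arrive
        for (int t = 0; t < totalCycles; t++) {
            while (next < procs.length && procs[next][1] <= t)
                queue.addLast(procs[next++]);    // arrivals join the tail
            if (queue.isEmpty()) { history[t] = -1; continue; }
            int[] p = queue.peekFirst();         // head runs to completion
            history[t] = p[0];
            if (--p[2] == 0) queue.removeFirst(); // process finished
        }
        return history;
    }
}
```

For example, a process with pid 1 arriving at t = 2 produces -1 entries before t = 2, and a later arrival waits until the head of the queue completes.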
Often it is beneficial to avoid running a single process for too long, and to give other processes a chance to run before the previous process completes. In Round Robin, we initially execute processes in the order we encountered them, as with FIFO.
This algorithm only allows processes to be run for a set number of cycles, dictated by the integer timeQuantum. After a process runs for
timeQuantum cycles, it is moved to the tail of the process queue.
Consider three processes that were submitted to the scheduler, with timeQuantum = 2:
A call to getProcessTimeline() will produce the array shown below:
p1 (with run time 3) cannot be fully executed within the
timeQuantum. So after
timeQuantum has elapsed, we insert it at the tail of the process queue, maintaining its remaining run time. In this case, p1 has 1 clock cycle remaining. Then, p2 has a chance to run. Because its run time is smaller than
timeQuantum, it completes before it is forced to the back of the queue. Since p3 arrives after p1 has been added to the tail of the queue, it runs after p1 gets its second chance to execute.
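The Round Robin requeue step described above can be sketched as follows, again with java.util.ArrayDeque standing in for your Deque and {pid, requestTime, runTime} triples standing in for Process. Note that a process requeued at the end of cycle t−1 joins the tail before any new arrival at cycle t, matching the p1/p3 ordering above.

```java
import java.util.ArrayDeque;

// Round Robin sketch. ArrayDeque stands in for your Deque; processes are
// {pid, requestTime, runTime} triples sorted by requestTime. history[t]
// holds the running PID at cycle t, or -1 if nothing runs.
public class RoundRobinSketch {
    public static int[] timeline(int[][] procs, int quantum, int totalCycles) {
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        int[] history = new int[totalCycles];
        int next = 0, used = 0;
        int[] running = null;
        for (int t = 0; t < totalCycles; t++) {
            while (next < procs.length && procs[next][1] <= t)
                queue.addLast(procs[next++]);   // arrivals join the tail
            if (running == null && !queue.isEmpty()) {
                running = queue.removeFirst();  // next process gets the CPU
                used = 0;
            }
            if (running == null) { history[t] = -1; continue; }
            history[t] = running[0];
            used++;
            if (--running[2] == 0) {
                running = null;                 // finished early or exactly on time
            } else if (used == quantum) {
                queue.addLast(running);         // quantum expired: back to the tail
                running = null;
            }
        }
        return history;
    }
}
```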
This scheduling algorithm is fairly self-explanatory. It always attempts to run the process with the smallest remaining run time first, regardless of whether another process is currently being run. If a process with a run time of 3 arrives while one with 4 cycles of run time remaining is being executed, the new process will replace the current one. When a process completes, the process with the next smallest run time is executed.
Consider three processes that were submitted to the scheduler:
p1 arrives at t=0 and is executed, as it is the only process waiting to be run. At t=1, p2 arrives with run time 2, which is less than p1's remaining run time of 3, so p2 replaces p1 and begins execution. Now, at time t = 2, a process p3 with run time 2 arrives. Since p2 has only 1 clock cycle of execution time remaining, less than p3's 2, it continues running. Once p2 completes, the next shortest process, p3, runs to completion, followed by p1.
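The preemptive shortest-remaining-time behavior can be sketched as below, with {pid, requestTime, runTime} triples standing in for Process. The tie-breaking rule here (ties go to the earlier arrival, so the current process keeps running on a tie) is an assumption consistent with the example above.

```java
import java.util.ArrayList;

// Preemptive shortest-remaining-time sketch. Processes are
// {pid, requestTime, runTime} triples sorted by requestTime;
// history[t] holds the running PID at cycle t, or -1.
public class SjfSketch {
    public static int[] timeline(int[][] procs, int totalCycles) {
        ArrayList<int[]> waiting = new ArrayList<>();
        int[] history = new int[totalCycles];
        int next = 0;
        for (int t = 0; t < totalCycles; t++) {
            while (next < procs.length && procs[next][1] <= t)
                waiting.add(procs[next++]);
            int best = -1; // pick smallest remaining time; ties keep the earlier arrival
            for (int i = 0; i < waiting.size(); i++)
                if (best < 0 || waiting.get(i)[2] < waiting.get(best)[2]) best = i;
            if (best < 0) { history[t] = -1; continue; }
            int[] p = waiting.get(best);
            history[t] = p[0];
            if (--p[2] == 0) waiting.remove(best); // finished
        }
        return history;
    }
}
```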
We have established the notion that some processes are more important to run than others. This notion is encapsulated in the idea of priorities. In a real operating system, a process’s priority is a combination of what is assigned to it by the operating system itself as well as an optional priority that the user may assign. For our final algorithm, Multilevel Queues, we aim to generalize Round Robin while also factoring in a process’s priority, resulting in an algorithm that is more flexible than the previous ones in scheduling processes.
You will use the
getPriority() method of
Process to get a process’s priority. Somewhat unintuitively, the lower the value of getPriority(), the greater the corresponding process’s “importance”, and hence the sooner our algorithm will run it. Priority ranges between 0 and Integer.MAX_VALUE.
In Round Robin, once a process elapses
timeQuantum cycles of run time, it is sent to the tail of the
process queue. In Multilevel Queues, however, it is sent to the tail of the process queue of one lower
priority. So, if a process p1 with priority 0 runs out of time on its current queue, it is inserted at the tail of the priority 1 queue.
In Multilevel Queues, the priority value associated with each process represents the initial queue the process is sent to. As such, you may have to store many different priority queues simultaneously. Similar to Round Robin and FIFO, when a process arrives, it is inserted at the tail of its corresponding queue. You should always run the highest priority process available. When a queue of a given priority is empty, begin executing processes at the next highest priority queue. Again, if multiple new processes arrive in the same queue at the same time, we consider the one submitted first as having arrived earlier.
For simplicity, we will only consider the queues with priority 0, 1, and 2. If a process elapses
timeQuantum while in the queue of priority 2, it should be reinserted at the tail of the priority 2 queue.
If a process on a low priority queue is running when a higher priority event arrives, your scheduler should wait until
timeQuantum elapses executing the current process before executing the higher priority process. You should not cut the execution time of any process short, as you did in Shortest Job First.
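Putting the multilevel rules together, a simulation sketch follows. As before, java.util.ArrayDeque stands in for your Deque, and processes are {pid, requestTime, runTime, priority} tuples; only levels 0–2 are modeled, with demotion capped at level 2.

```java
import java.util.ArrayDeque;

// Multilevel-queue sketch with priority levels 0..2. ArrayDeque stands in
// for your Deque; processes are {pid, requestTime, runTime, priority}
// tuples sorted by requestTime. history[t] holds the running PID or -1.
public class MultilevelSketch {
    @SuppressWarnings("unchecked")
    public static int[] timeline(int[][] procs, int quantum, int totalCycles) {
        ArrayDeque<int[]>[] queues = new ArrayDeque[3];
        for (int i = 0; i < 3; i++) queues[i] = new ArrayDeque<>();
        int[] history = new int[totalCycles];
        int next = 0, used = 0, level = 0;
        int[] running = null;
        for (int t = 0; t < totalCycles; t++) {
            while (next < procs.length && procs[next][1] <= t)
                queues[procs[next][3]].addLast(procs[next++]); // join initial queue
            if (running == null)                // no preemption mid-quantum
                for (int q = 0; q < 3 && running == null; q++)
                    if (!queues[q].isEmpty()) {
                        running = queues[q].removeFirst();     // highest priority wins
                        level = q;
                        used = 0;
                    }
            if (running == null) { history[t] = -1; continue; }
            history[t] = running[0];
            used++;
            if (--running[2] == 0) {
                running = null;                 // finished
            } else if (used == quantum) {
                level = Math.min(level + 1, 2); // demote, capped at level 2
                queues[level].addLast(running);
                running = null;
            }
        }
        return history;
    }
}
```

The test below mirrors the narrative that follows: p1 (priority 0) runs, is demoted ahead of p2, and completes; p4 then preempts at priority 0; p2 is demoted below p3.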
Consider four processes that were submitted to the scheduler, with
timeQuantum = 2:
After simulating up until t=10, a call to
getProcessTimeline() will produce the array shown below:
Process p1 arrives and is added to the queue of priority 0, where it runs for 2 clock cycles before dropping a priority level. It then arrives at the priority 1 queue, which it enters ahead of p2. So p1 runs another two cycles and completes.
While it is running, p3 arrives in queue 2.
Now, at t=4, p4 arrives in queue 0, where it becomes the highest priority process available. So, we run p4 for
timeQuantum, where it completes.
Then, in queue 1, p2 runs for
timeQuantum=2 clock cycles before dropping to the tail of the priority 2 queue. From there, p3 runs to completion, and then p2 finishes processing.
Scheduler should throw an
IllegalArgumentException if
numCycles is less than or equal to 0, or if it is passed a
Process with a run time of less than or equal to 0.
If you do not implement the multi-level queue, then throw an
UnsupportedOperationException when the Scheduler is passed a
MultilevelQueue. If you do implement the multi-level queue, make sure to throw an
IllegalArgumentException if the priority is less than 0. In addition, if you pass either RoundRobin or MultilevelQueue into the constructor without a time quantum, then you should throw an IllegalArgumentException.
Beginning with this homework assignment, we will ask you to submit your unit tests along with your code. In addition to submitting your
Deque.java implementation and your
Scheduler.java, you will also submit unit tests:
Please read the following resources that explain unit tests:
Files to submit:
The goals of this assignment are to test your understanding of the material covered in sections 1.3 and 1.4 of the textbook, and the lecture and recitation materials. You should read the textbook chapters before doing this part of the assignment.
Written homework assignments must be typeset in LaTeX and submitted in PDF format. Please insert a page break between each question, so that your answer to each question starts on a new page in your PDF document.
What does the following code fragment do to the queue q?
Explain the difference between analyzing the worst case running time of an operation and amortized analysis.
Prove that, starting from an empty stack, the number of array accesses used by any sequence of operations of length M in the resizing array implementation of a stack is proportional to M.
Suppose we have a resizing array that increases in size by a constant c entries when the array is full, and decreases in size by c entries when the array has c empty entries. Show that the push and pop operations each take Θ(M) amortized time (where M is the number of operations, as in the previous problem) for some worst case sequence. Give an example of a worst case sequence. Observe that this results in Θ(M²) time for M operations.