Suppose a single-core system is running three processes: A, B, and C. Then the following sequence of events occurs, in order:
Question 1 (1 points): (see above) During this sequence, what is the minimum number of process context switches that must occur? (Write your answer as a base-10 integer.)
Question 2 (1 points): (see above) In a Unix-like operating system (with a monolithic kernel design), which of the following operations (from the sequence above) most likely would require starting a system call?
Select all that apply
Question 3 (1 points): Suppose an xv6 process has an invalid value in its stack pointer register while it is running in user mode.
What are the expected consequences of this?
Select all that apply
Question 4 (1 points): When an exception occurs, processors will usually _____. (Include all types of exceptions, including what some sources call interrupts, traps, faults, etc.)
Select all that apply
Question 1 (1 points): Consider the following C code that uses the POSIX API:
write(STDOUT_FILENO, "A", 1);
pid_t pid = fork();
write(STDOUT_FILENO, "B", 1);
if (pid == 0) {
    write(STDOUT_FILENO, "C", 1);
    exit(0);
} else if (pid > 0) {
    write(STDOUT_FILENO, "D", 1);
    waitpid(pid, NULL, 0);
    write(STDOUT_FILENO, "E", 1);
}
Ignore any minor syntax errors above, assume all appropriate headers are included, and assume that none of fork(), write(), and waitpid() fail.
Two possible outputs of the above code are "ABDBCE" and "ABCBDE". Write another.
Question 2 (1 points): Immediately after a call to fork(), what is true?
Select all that apply
Question 3 (1 points): Consider the following Unix-style shell command
./A B > C
The following is some partial C code using the POSIX API that would execute that command:
const char *args[10] = {"./A", "B", NULL, NULL};
pid_t pid = fork();
if (pid < 0) {
    handleError();
} else if (pid > 0) {
    waitpid(pid, NULL, 0);
} else {
    ______________________
    ______________________
    execv("./A", args);
}
Ignore any minor syntax errors above, missing error handling, and assume all appropriate headers are included.
Which of the following would be best to place in the blanks above to complete the code? (Ignore minor syntax errors and missing error handling.)
For the following questions, consider the following Unix-style shell pipeline:
./A | ./B | ./C
When this is run, the commands ./A, ./B, and ./C are run in parallel, the output of ./A becomes the input to ./B, and the output of ./B becomes the input to ./C.
Assume that the shell uses fork() to create new processes and pipe() to allow the new processes to receive other processes' output as input.
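The fork()/pipe() wiring described above can be sketched in C for a two-command pipeline (like ./A | ./B). This is an illustrative sketch, not the course's required implementation: the helper name is invented, and dup2() is one common way to redirect the children's standard input/output onto the pipe.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: run cmd1 | cmd2, wiring cmd1's stdout to cmd2's stdin.
 * argv arrays are NULL-terminated, as for execv.
 * Returns the exit status of cmd2, or -1 on error. */
int run_two_stage_pipeline(char *const cmd1[], char *const cmd2[]) {
    int fds[2];
    if (pipe(fds) < 0) return -1;        /* fds[0] = read end, fds[1] = write end */

    pid_t p1 = fork();
    if (p1 == 0) {                       /* first child: writes into the pipe */
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]); close(fds[1]);
        execv(cmd1[0], cmd1);
        _exit(127);                      /* only reached if execv fails */
    }

    pid_t p2 = fork();
    if (p2 == 0) {                       /* second child: reads from the pipe */
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]); close(fds[1]);
        execv(cmd2[0], cmd2);
        _exit(127);
    }

    /* Parent must close both ends so cmd2 sees EOF when cmd1 exits. */
    close(fds[0]); close(fds[1]);

    int status;
    waitpid(p1, NULL, 0);
    waitpid(p2, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Note the parent's close() calls: if the parent kept the write end open, the second command would never see end-of-file on its input.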
Question 4 (1 points): (see above) When executing the above shell pipeline, how many times should the shell call fork() or an equivalent function? (Write your answer as a base-10 number like "15".)
Question 5 (1 points): (see above) When executing the above shell pipeline, how many times should the shell call pipe() or an equivalent function? (Write your answer as a base-10 number like "15".)
Question 1 (1 points): In a Unix-like system, which of the
following are attributes of a thread as opposed to a process?
Select all that apply
Question 2 (1 points): Which of the following changes to xv6's scheduler would be likely to lower
overhead?
Select all that apply
Question 3 (1 points): A first-come first-served scheduler will often be better than
a round-robin scheduler at providing which of the following?
Select all that apply
Consider the following schedule:
Question 4 (1 points): (see above) What is the total turnaround time experienced by thread B between time 0 and 11? (Write your answer as a base-10 number like "15".)
One source of confusion on this question was mixing up "waiting for I/O" with "wait time". In the context of CPU scheduling, "wait time" refers to time that a thread could be run on the CPU but is not — so time spent waiting for I/O would not be included.
Question 5 (1 points): (see above) What is the total turnaround time experienced by thread C between time 0 and 11? (Write your answer as a base-10 number like "15".)
Question 6 (1 points): Without preemption, shortest job first does not minimize mean turnaround time.
In fact, in some corner cases, it fails to even achieve the minimum mean turnaround time
possible without preemption (and on a single core).
One reason for this is that shortest job first does not consider schedules that leave the processor
idle in order to wait for a short job.
Which of the following are examples of such scenarios?
Select all that apply
Suppose a system uses a (preemptive) multi-level feedback queue scheduler to decide when threads get access to a single core. This scheduler divides threads into five priorities:
And the scheduler increases the priority of threads (if possible) whenever they are not preempted and do not use their entire time slice and decreases the priority of a thread whenever it uses its entire time slice and still requires more computation.
This system is running four threads, each of which have the following CPU usage patterns which are repeated indefinitely:
For the following questions, assume context switch and other scheduling overheads are negligible and all threads have been running for a while.
Question 1 (1 points): (see above) Thread B1 will have priority
Question 2 (1 points): (see above) When thread C completes a disk I/O, it is possible that it will need to wait for other threads to perform computations before it will be allowed to start performing its 8.5 milliseconds of computation.
Which of the following is the maximum amount of time that it might need to wait to start its computation? If the true answer is on a boundary (e.g. 1.1ms) then we will accept either of the two answers that touch the boundary as correct (e.g. both "less than 1.1 ms" and "between 1.1 ms and 1.8 ms" for 1.1ms).
Question 3 (1 points): An alternative to the multi-level feedback queue style of design for approximating SRTF is to try to record the length of CPU bursts, and use the average of recent CPU burst times to make scheduling decisions.
In order to perform the task of recording the length of CPU bursts, an operating system can time how long a process runs each time it is scheduled. This will result in a sequence of timings, one for each time the thread is scheduled on a core. The length of each CPU burst (suitable for approximating SRTF) is equal to _____
Question 4 (1 points): A scheduler that approximates SRTF, like a multi-level feedback queue design, will tend to favor interactive programs that perform short CPU bursts. Linux's Completely Fair Scheduler sometimes does so, but since it has a different scheduling goal, it often will not. In which of the following scenarios is CFS likely to result in much worse performance for the interactive thread than an MLFQ-like scheduler that approximates SRTF?
(Assume every thread has the same weight in CFS. Ignore CFS features we did not discuss in class, like handling multiple cores.)
Select all that apply
Question 5 (1 points): Consider the following C code that uses the POSIX threading API:
pthread_t a, b;
struct point { int x, y; };
void *bar(void *arg) {
    struct point *p = (struct point *) arg;
    printf("%d\n", p->x);
    return NULL;
}
void *foo(void *arg) {
    struct point *initial = (struct point *) arg;
    struct point new;
    new.x = initial->x; new.y = initial->y;
    pthread_create(&b, NULL, bar, (void*) &new);
    pthread_join(b, NULL);
    return NULL;
}
int main(void) {
    struct point *main_pt = malloc(sizeof(struct point));
    main_pt->x = 100; main_pt->y = 120;
    pthread_create(&a, NULL, foo, main_pt);
    pthread_join(a, NULL);
    free(main_pt);
}
Ignore any minor syntax errors above, assume all appropriate headers are included and that the pthread functions do not fail.
When this code is run, p->x in bar most likely will access _____?
Consider a multicore system where each core has its own cache. Consistency between these caches is maintained using a shared bus and a cache coherency protocol where each cache block is either in a modified, shared, or invalid state like we discussed in lecture. The implementation of this protocol tries to maximize efficiency by:
On this system, cache blocks are 256 bytes, so addresses 0x00 through 0xFF are in one cache block, as are 0x100 through 0x1FF, 0x200 through 0x2FF, etc.
Suppose all cores initially have empty caches and perform single-byte reads and writes as follows, in this order:
Assume at no point are cache blocks evicted from the cache other than due to invalidations to ensure consistency with other cores.
Question 1 (1 points): (see above) What is the final state of the cache block containing 0x4020 on core 1?
Question 2 (1 points): (see above) What is the final state of the cache block containing 0x5000 on core 1?
Question 3 (1 points): Suppose two threads A and B coordinate using a spinlock implemented based on atomic exchange like we discussed in lecture. Assume, unlike xv6's spinlocks, these threads do not disable interrupts when acquiring the spinlock.
Both threads are running on a single-core system that uses a round-robin scheduler with a 20 millisecond time quantum. Thread A acquires the spinlock, then needs to do about 50 milliseconds of computation, then will release the spinlock and terminate. About 1 millisecond after thread A acquires the spinlock, thread B tries and waits to acquire the spinlock. Assuming thread A and B are the only runnable threads on the system (and never need to wait for I/O), how long will it be between when thread A acquires the spinlock and when it releases it?
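An atomic-exchange spinlock of the kind described above might look like the following C11 sketch (the type and function names are illustrative, not the lecture's exact code):

```c
#include <stdatomic.h>

/* A minimal spinlock built on atomic exchange. Unlike xv6's spinlocks,
 * nothing here disables interrupts, so the holder can be preempted
 * while waiters burn their entire time slices spinning. */
typedef struct { atomic_int locked; } spinlock_t;

void spin_acquire(spinlock_t *l) {
    /* atomic_exchange returns the old value: loop until we swap 0 -> 1,
     * i.e. until we observed the lock free and claimed it atomically. */
    while (atomic_exchange(&l->locked, 1) == 1)
        ;  /* busy-wait */
}

void spin_release(spinlock_t *l) {
    atomic_store(&l->locked, 0);
}
```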
Question 4 (1 points): In lecture, we discussed implementing a mutex that does not consume a lot of CPU
running a waiting loop by using a queue of waiting threads and a spinlock.
This mutex implementation may _____.
Select all that apply
Question 5 (1 points): Consider the following incomplete C code that uses the POSIX threading API:
pthread_mutex_t lock;
pthread_cond_t cv; // condition variable
int values[2];
int have_values[2] = {0, 0};
void SetValue(int index, int new_value) {
    pthread_mutex_lock(&lock);
    values[index] = new_value;
    have_values[index] = 1;
    if (___________________________________) {
        pthread_cond_broadcast(&cv);
    }
    pthread_mutex_unlock(&lock);
}
void WaitForAndGetBothValues(int *values_copy) {
    pthread_mutex_lock(&lock);
    while (!have_values[0] || !have_values[1]) {
        pthread_cond_wait(&cv, &lock);
    }
    values_copy[0] = values[0];
    values_copy[1] = values[1];
    pthread_mutex_unlock(&lock);
}
Ignore any minor syntax errors above and assume all condition variables, locks, etc. are appropriately initialized before any of the above code is run.
The WaitForAndGetBothValues function is intended to wait for SetValue to be called with both index 0 and index 1, then return the resulting contents of the values array (by copying it into an array located using the supplied pointer).
Which of the following would be best to place in the blank above to make WaitForAndGetBothValues behave correctly?
Question 1 (1 points): Consider the following code that uses semaphores:
sem_t a, b;
int x = 0;
void foo() {
    sem_wait(&a);
    write(STDOUT_FILENO, "A", 1);
    x = x + 1;
    sem_post(&a);
    write(STDOUT_FILENO, "B", 1);
    sem_wait(&b);
    write(STDOUT_FILENO, "C", 1);
}
void bar() {
    sem_wait(&a);
    while (x > 1) {
        write(STDOUT_FILENO, "D", 1);
        sem_post(&b);
        x = x - 1;
        write(STDOUT_FILENO, "E", 1);
    }
    sem_post(&a);
}
Suppose the a semaphore is initialized to 1 and the b semaphore is initialized to 0 by code not shown, and no code other than the initialization code and the code above uses these semaphores or changes the value of x.
Which of the following are possible outputs from multiple threads calling foo() and bar() multiple times?
Select all that apply
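One possible form of the initialization code described above (the function name is illustrative; the question only specifies the initial values a = 1 and b = 0):

```c
#include <semaphore.h>

sem_t a, b;

/* a starts at 1, so it acts like a mutex guarding x;
 * b starts at 0, so it acts like a signal that bar() posts to foo(). */
void init_semaphores(void) {
    sem_init(&a, 0, 1);   /* second argument 0: shared between threads, not processes */
    sem_init(&b, 0, 0);
}
```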
Suppose we have a multithreaded system for handling a ticket vending service. When dealing with the last few remaining tickets, this service attempts to offer each ticket to a user and wait until they complete the purchase of the ticket or decline to purchase it. While this is happening, another user can be waiting for a ticket to become available or for all tickets to be sold out.
Suppose, to implement this, the service provides the following functions:
bool WaitForNextTicket() --- waits for a ticket to be available for purchase or for all tickets to be sold out. Returns true if a ticket is still available for purchase, in which case it is reserved until BuyTicket() or DeclineTicket() is called. Returns false if tickets are sold out.
void BuyTicket() --- purchases a ticket
void DeclineTicket() --- declines to purchase a ticket
so the code that prompts a user for their decision looks something like:
if (WaitForNextTicket()) {
    printf("Purchase ticket? ");
    if (ReadInputFromUser() == YES) {
        BuyTicket();
    } else {
        DeclineTicket();
    }
} else {
    printf("Tickets are sold out\n");
}
Question 2 (1 points): (see above) Suppose these functions are implemented using mutexes and condition variables.
The WaitForNextTicket() function would most likely call pthread_cond_wait ____________.
Question 3 (1 points): (see above) Suppose these functions are implemented using POSIX semaphores.
Of the choices listed below, it is most likely that the WaitForNextTicket() function would call sem_wait on a semaphore whose value represents ______.
Consider a multithreaded system for processing college applications that stores most of its data in memory. It is written in C++ and uses the following structs:
struct Student {
    pthread_mutex_t lock;
    string name;
    Application* current_application;
    ...
};
struct Application {
    pthread_mutex_t lock;
    Student *student;
    AcademicYear *year;
    string status; /* one of pending, accepted, admitted, rejected, withdrawn */
    ...
};
struct AcademicYear {
    pthread_mutex_t lock;
    set<Application*> pending, admitted, rejected, withdrawn;
};
and has the following functions relevant to the questions below:
bool WithdrawPendingStudent(Student *student) {
    pthread_mutex_lock(&student->lock);
    Application *app = student->current_application;
    pthread_mutex_lock(&app->lock);
    bool result;
    if (app->status == "pending") {
        pthread_mutex_lock(&app->year->lock);
        app->status = "withdrawn";
        app->year->pending.erase(app);
        app->year->withdrawn.insert(app);
        pthread_mutex_unlock(&app->year->lock);
        result = true;
    } else {
        result = false;
    }
    pthread_mutex_unlock(&app->lock);
    pthread_mutex_unlock(&student->lock);
    return result;
}
void AdmitFirstKPending(AcademicYear *year, int K) {
    pthread_mutex_lock(&year->lock);
    for (int i = 0; i < K; ++i) {
        Application *app = FindHighestRanked(&year->pending);
        year->admitted.insert(app);
        year->pending.erase(app);
        pthread_mutex_lock(&app->lock);
        app->status = "admitted";
        pthread_mutex_unlock(&app->lock);
    }
    pthread_mutex_unlock(&year->lock);
}
(Please ignore minor syntax errors, etc. in the above, assume all appropriate headers are included, etc.)
Question 4 (1 points): (see above) Two simultaneous calls to WithdrawPendingStudent() and AdmitFirstKPending()
sometimes deadlock. When this occurs, the thread calling
WithdrawPendingStudent() can be holding the lock in a _____ while
the thread calling AdmitFirstKPending() is trying to acquire the
same lock.
Select all that apply
Question 5 (1 points): (see above) Two simultaneous calls to WithdrawPendingStudent() and AdmitFirstKPending() sometimes deadlock. When this occurs, the thread calling AdmitFirstKPending() can be holding the lock in a _____ while the thread calling WithdrawPendingStudent() is trying to acquire the same lock.
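Deadlocks like the ones in the two questions above arise when two threads acquire the same pair of locks in opposite orders. A generic way to rule out circular wait, sketched here as an illustration (not the course's prescribed fix), is to always take locks in one global order, e.g. by address:

```c
#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a fixed global order (here: by address) so that
 * every thread takes them in the same order, ruling out circular wait.
 * The helper name is illustrative. */
void lock_pair_in_order(pthread_mutex_t *m1, pthread_mutex_t *m2) {
    if (m1 == m2) { pthread_mutex_lock(m1); return; }
    if ((uintptr_t)m1 < (uintptr_t)m2) {
        pthread_mutex_lock(m1);
        pthread_mutex_lock(m2);
    } else {
        pthread_mutex_lock(m2);
        pthread_mutex_lock(m1);
    }
}
```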
Suppose an x86 system which uses the same page table layout as xv6 (two-level page tables, 4KB pages, 32-bit virtual addresses, 4B page table entries) where:
- the page table base register contains the physical address 0xAC000 (a byte address, not a page number)
- the first-level page table entry for virtual address 0x4443 contains page number 0x123 and is marked as present
- the second-level page table entry for virtual address 0x4443 contains page number 0x456 and is marked as present
Question 1 (1 points): (see above) The first-level page table ("page directory") entry for virtual address 0x4443 is at what physical address? Write your answer as a hexadecimal number like 0xAC888; if not enough information is provided, write unknown.
Question 2 (1 points): (see above) The second-level page table entry for virtual address 0x4443 is at what physical address? Write your answer as a hexadecimal number like 0xAC888; if not enough information is provided, write unknown.
Question 3 (1 points): (see above) The byte stored at virtual address 0x4443 is located at what physical address? Write your answer as a hexadecimal number like 0xAC888; if not enough information is provided, write unknown.
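The address arithmetic behind translations like the ones above can be written as small helpers. This sketch follows the xv6-style layout stated above (10-bit directory index, 10-bit table index, 12-bit page offset, 4-byte entries); the addresses in the test are hypothetical examples, not this question's values:

```c
#include <stdint.h>

/* Decompose a 32-bit virtual address under the xv6-style two-level layout:
 * the top 10 bits index the page directory, the next 10 bits index the
 * second-level table, and the low 12 bits are the offset in a 4KB page. */
uint32_t pde_index(uint32_t va)   { return (va >> 22) & 0x3FF; }
uint32_t pte_index(uint32_t va)   { return (va >> 12) & 0x3FF; }
uint32_t page_offset(uint32_t va) { return va & 0xFFF; }

/* With 4-byte entries, entry i of a table whose physical base byte
 * address is `base` lives at base + 4*i. */
uint32_t entry_address(uint32_t base, uint32_t index) { return base + 4 * index; }
```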
Question 4 (1 points): Suppose we wanted to make a system call in xv6 with the following prototype:
int cross_process_copy(char *source_address, int dest_pid, char *dest_address)
which would copy one byte from virtual address source_address in the current process to virtual address dest_address in the destination process dest_pid.
After retrieving the arguments for this system call, one potential implementation strategy would look like the following:
char *real_source_address = convert_source_address(source_address);
char *real_dest_address = convert_dest_address(dest_pid, dest_address);
*real_dest_address = *real_source_address;
To make this work, the convert_dest_address function should:
Question 5 (1 points): Suppose an operating system does not allocate memory for programs or load a program's data from disk until it attempts to use each page of it.
Assume the operating system is using a page table layout like xv6 (two-level page tables, 4KB pages, 32-bit virtual addresses, 4B page table entries).
Suppose a program accesses all of an 8192-byte global array for the first time. When this occurs, page faults may occur to allocate memory for the array.
How many page faults are required may depend on what addresses were allocated to the array (for example, whether the array starts in the middle of a page) and whether memory around the array was already accessed previously.
What numbers of page faults are possible?
Consider the following C code:
int fd1 = open("one.data", O_RDONLY);
if (fd1 < 0) handle_error();
char *p = mmap(NULL, 8192, PROT_READ, MAP_SHARED, fd1, 0);
if (p == (char*) MAP_FAILED) handle_error();
int fd2 = open("one.data", O_RDWR);
if (fd2 < 0) handle_error();
char *q = mmap(NULL, 8192, PROT_READ | PROT_WRITE, MAP_SHARED, fd2, 0);
if (q == (char*) MAP_FAILED) handle_error();
*q = 'a';
printf("%c", *p);
Assume all required headers are included, ignore minor syntax errors above, and assume open() and mmap() do not fail (so handle_error() is not reached).
Assume p and q contain different virtual addresses.
Question 1 (1 points): (see above) Immediately after the code above runs, which of the following statements are true?
Select all that apply
Question 2 (1 points): (see above) Immediately after the code above runs, which of the following operations are likely to trigger a page or protection fault for a virtual address assigned to the file one.data?
Select all that apply
Question 3 (1 points): Suppose three processes A, B, and C are running. Processes A and B share the same executable foo.exe, and the code from the executable is mapped into both of their address spaces. Process C is running with a different executable and tries to use a lot of space on its heap. In order to satisfy process C's requirement for more memory, the operating system chooses to use space currently used for part of the machine code in foo.exe. This space has been in memory since it was first used by process A. However, process B most recently accessed the machine code in question.
As part of allocating the data for C, the operating system should ______.
Select all that apply
Suppose a system implements virtual memory with 5 physical pages used to satisfy a program's memory needs. To keep things simple, the physical pages are only used for the program's memory and not to cache file data, etc.
It is running two processes A and B, and each of those processes is assigned three virtual pages: for process A, A1, A2 and A3; and for process B, B1, B2, and B3.
Process A executes the following access pattern:
and process B executes the following access pattern:
Question 1 (1 points): (see above) Suppose the two processes are run so their accesses are interleaved, so the overall access pattern is:
Assuming all physical pages are initially empty, with an LRU replacement policy how many page replacements will occur? (Loading data into a physical page, regardless of whether it was empty or not, is a page replacement.) Write your answer as a base-10 number.
Question 2 (1 points): (see above) Suppose the two processes are run one after the other, so the overall access pattern is:
Assuming all physical pages are initially empty, with an LRU replacement policy how many page replacements will occur? (Loading data into a physical page, regardless of whether it was empty or not, is a page replacement.) Write your answer as a base-10 number.
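Answers to LRU-counting questions like the two above can be checked with a small simulation. This sketch follows the questions' convention that loading a page into any frame, empty or not, counts as a replacement:

```c
/* Count page replacements under LRU with `nframes` physical pages.
 * `refs` is the access pattern (virtual page ids), `n` its length.
 * Loading into any frame, empty or not, counts as a replacement. */
int count_lru_replacements(const int *refs, int n, int nframes) {
    int frames[64];       /* resident page ids (assumes nframes <= 64) */
    int last_use[64];     /* time of most recent use of each frame */
    int used = 0, replacements = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (frames[i] == refs[t]) { hit = i; break; }
        if (hit >= 0) { last_use[hit] = t; continue; }   /* hit: refresh recency */
        replacements++;                                  /* miss: load the page */
        if (used < nframes) {                            /* an empty frame remains */
            frames[used] = refs[t]; last_use[used] = t; used++;
        } else {                                         /* evict least recently used */
            int lru = 0;
            for (int i = 1; i < used; i++)
                if (last_use[i] < last_use[lru]) lru = i;
            frames[lru] = refs[t]; last_use[lru] = t;
        }
    }
    return replacements;
}
```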
Question 3 (1 points): In lecture we discussed an approximation of LRU called SEQ that:
Which of the following is true about this scheme?
Select all that apply
Question 4 (1 points): In some circumstances, readahead can dramatically increase the number of page cache misses (and therefore the amount of data that must be (re)loaded from disk). Suppose this is true even though the pages which are read ahead are, in fact, used before they are replaced.
Which of the following are true about this scenario where readahead causes bad performance?
Select all that apply
Question 5 (1 points): Suppose a process takes input from the keyboard on a xv6-like operating system.
As part of obtaining input from the keyboard controller, the operating system needs to
use a control register on the device controller to determine whether there is input.
In which of the following scenarios is code that accesses the keyboard controller's control registers
likely to run?
Select all that apply
Question 1 (1 points): Suppose a network interface device uses direct memory access (DMA). It provides the following registers, each of which has a corresponding memory address through which the operating system can set it:
(Assume no IOMMU is in use. If you do not know what an IOMMU is, then it won't affect your answer.)
In order to receive input from the network using this device, it would be best for the operating system to set the register indicating the address of the input buffer to:
For these questions, consider a FAT-like filesystem with:
Question 2 (1 points): (see above) Ignoring space in the FAT itself and the header (BPB), how many clusters would be used (completely or partially) to store a FAT filesystem whose root directory contains 100 directories and each of those directories contains one 1000-byte text file? Write your answer as a base-10 number of clusters (like "500")
Question 3 (1 points): (see above) In the scenario described in the previous question, how many FAT entries would contain a value representing a cluster number? Write your answer as a base-10 number of clusters (like "500").
Question 4 (1 points): Consider the following strategies for assigning clusters to a large file on a FAT filesystem. Assume the operating system assigning clusters knows the number of clusters that must be assigned in advance and that this operating system always keeps a copy of the file allocation table in memory for performance.
Which of the following strategies would likely result in the best performance when reading the file from a hard drive?
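The FAT traversal that underlies the questions above can be sketched as a chain walk: each FAT entry holds the number of the file's next cluster, and with the whole table in memory the walk itself needs no disk reads (only reading the clusters' data does). The function name and end-marker value here are illustrative:

```c
/* Follow a FAT cluster chain starting at cluster `start`.
 * fat[c] holds the next cluster after c; `end_marker` terminates a chain.
 * Returns the number of clusters in the chain. */
int fat_chain_length(const unsigned *fat, unsigned start, unsigned end_marker) {
    int count = 0;
    for (unsigned c = start; c != end_marker; c = fat[c])
        count++;   /* each step is one array lookup when the FAT is in memory */
    return count;
}
```

On a hard drive, performance then depends on where the chain's clusters sit on disk: each jump to a non-adjacent cluster costs a seek, which is why the cluster-assignment strategy matters.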
Consider an xv6-like filesystem with:
Question 5 (1 points): (see above) To store a directory with two 99.5KB files (where KB = 1024 bytes), how many blocks are used (outside of inodes, free block maps, etc.)? Include any blocks allocated to hold block pointers (outside of inodes) and space required for the directory.
Write your answer as a base-10 number (like "500").
Suppose an inode-based filesystem uses redo-logging to recover from failures.
As part of truncating a file from a large size to a smaller size it needs to write:
The filesystem performs this truncation operation atomically, meaning that on a failure, after reboot, either the file will be truncated or will have its original size.
Question 1 (1 points): (see above) What is a valid order for the filesystem to perform the above operations during normal operation?
Write your answer as a sequence of letters. For example, if you think it would be valid for each of the operations above to occur exactly once and in the order written above, write "ABCDEFG".
Question 2 (1 points): We discussed the idea of "block groups" (like implemented in the Berkeley Fast File System) as a mechanism for keeping related data and metadata physically close on a hard drive.
In the Berkeley FFS scheme, each directory and the files it contains are assigned to a block group. This makes some assumptions about the typical access patterns for files.
For which of the following access patterns are many long seeks likely to be required
(assuming data is not cached) when using the block group scheme we described in lecture?
Select all that apply
Consider an inode-based filesystem that supports fragments for small files.
In this filesystem, each inode contains (among other fields):
- a boolean which is true if the first direct block represents an extent and not a normal block
- 12 direct block pointers
- 1 indirect block pointer
- 1 double-indirect block pointer
- 1 triple-indirect block pointer
The filesystem uses 4KB blocks with 1KB fragments and directory entries that take 64 bytes.
Assume that the filesystem only uses fragments for files that fit within a single fragment.
For this question, 1KB = 1024 bytes and 1MB = 1024KB.
Question 3 (1 points): (see above) Suppose the filesystem is storing a directory with one thousand files where each of those files is 768 bytes. How many blocks will the filesystem use outside of inodes, free block maps, superblocks, etc.? Include space required for file data, directory entries and blocks of block pointers (outside of inodes). Write your answer as a base-10 number. (Assume the filesystem uses fragments in the most efficient way possible.)
Question 4 (1 points): (see above) Suppose the filesystem is storing a directory with 10 subdirectories, where each subdirectory contains a 10MB file. How many blocks will the filesystem use outside of inodes, free block maps, superblocks, etc.? Include space required for file data, directory entries and blocks of block pointers (outside of inodes). Write your answer as a base-10 number. (Assume the filesystem uses fragments in the most efficient way possible.)
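The block-counting in questions like the two above can be organized as a small calculation. This sketch assumes 4KB blocks and 4-byte block pointers (the pointer size is an assumption; the question text does not state it), so one indirect block holds 1024 pointers. It ignores fragments (i.e. it applies to files larger than one fragment) and does not handle files big enough to need the triple-indirect pointer:

```c
/* Estimate the blocks used by one file of `size` bytes under the inode
 * layout above: data blocks plus blocks of block pointers outside the inode.
 * ASSUMPTIONS: 4KB blocks, 4-byte block pointers (1024 pointers per block),
 * 12 direct pointers; fragments and the triple-indirect pointer are ignored. */
long blocks_for_file(long size) {
    const long BLOCK = 4096, PTRS = 1024, DIRECT = 12;
    long data = (size + BLOCK - 1) / BLOCK;        /* data blocks, rounded up */
    long pointer_blocks = 0;
    long beyond_direct = data - DIRECT;
    if (beyond_direct > 0) {
        pointer_blocks += 1;                       /* the indirect block */
        long beyond_indirect = beyond_direct - PTRS;
        if (beyond_indirect > 0) {
            pointer_blocks += 1;                   /* the double-indirect block */
            pointer_blocks += (beyond_indirect + PTRS - 1) / PTRS; /* its indirect blocks */
        }
    }
    return data + pointer_blocks;
}
```

For example, under these assumptions a 13-block file needs 13 data blocks plus one indirect block.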
Question 5 (1 points): Suppose one uses the client/server communication model we discussed in lecture in a system with two clients A and B and one server S.
Suppose we want to implement a way to send a message from client A to another client B. To be most consistent with the client/server model, this would be implemented by ______.
Suppose we wanted to allow someone to read files remotely using RPC. To do this, we take the simple solution of exposing the following POSIX file I/O calls as RPC calls:
int open(const char *path, int flags)
int read(int fd, char* buffer, size_t size)
int close(int fd)
Question 1 (1 points): (see above) To convert the read() call above into an RPC call, the resulting RPC call would most likely
Select all that apply
Question 2 (1 points): (see above) Which of the following statements about how this scheme deals with failures are true?
Select all that apply
Question 3 (1 points): Suppose an RPC system attempts to deal with errors by making sure that every message sending an RPC call or the return value of an RPC call receives an acknowledgment. If it does not, the RPC system retries sending that message until either it receives that acknowledgment or a certain amount of time has passed (in which case it signals an error to the calling program, if possible). In this scheme, if a client does not receive the return value of an RPC call after a certain amount of time, it also signals an error to the calling program.
Suppose network failures involving messages being lost or reordered can occur,
and machine failures can occur according to the fail-stop model.
Which of the following are possible?
Select all that apply
Question 4 (1 points): Consider a system using two-phase commit that distributes a database across two worker machines A and B and uses a single coordinator machine C.
Suppose after preparing a transaction with machines A and B, machine C receives a vote to commit from machine A and machine B. Then, before machine C can act on that result, machine A goes down for a long time.
When machine A comes back up, what would be most appropriate for it to do first?
Question 1 (1 points): Suppose we want to provide the ability to run commands on a remote server using a stateless
server. With support from servers, which of the following features could we provide
that would be consistent with the stateless design?
Select all that apply
Suppose an RPC call takes 1000 microseconds to complete plus an additional microsecond for every kilobyte of data transferred in the RPC call plus the time required to run the corresponding RPC function on the server.
Question 2 (1 points): (see above) Suppose this RPC system is used by NFSv2. A client application opens a 2048KB file, reads the last 1024KB of it 128KB at a time, and then closes it. Assume the client does not perform any caching of file data. The client already has the file ID for the directory, but is missing one for the file itself. Ignoring the time required to run the RPC functions on the server that look up file data (and so just including the RPC overhead), about how much time is required for this operation in microseconds?
(Assume the client does not need to check the attributes of the file as part of opening it.)
Question 3 (1 points): When programs run with special privileges on behalf of other users (like set-user-ID programs), a major concern is time-of-check-to-time-of-use vulnerabilities. One common example of these is when a program checks whether the user on whose behalf it is running is allowed to access some file specified via a path, and then, later, actually accesses the file without verifying that the path still refers to the same resource that was checked earlier.
What are some examples of operating system support that could be added to POSIX-like systems
that would be helpful for avoiding these problems? (In the options, file mode bits
refers to the very limited access control list POSIX supports via chmod
.)
Consider a virtual machine implemented using trap-and-emulate without special hardware support for virtualization.
Suppose a program is running in the virtual machine and executes a system call to read input from the keyboard. The guest OS, while handling this system call, switches to another program and runs it in user mode. Eventually, the hypervisor detects keyboard input. As a result, it decides to simulate an interrupt for the keyboard. Then, the guest OS reads a control register from the keyboard controller. Then, the guest OS switches to the original program and runs it in user mode.
Question 4 (1 points): (see above) During the process described above, which of the above operations are likely to involve an exception (of any kind, including interrupts, traps, etc.) on the host machine (that is, an exception handled by the hypervisor)?
Select all that apply
Question 5 (1 points): (see above) If the hypervisor is implemented using shadow page tables, which of the above operations are likely to involve changing which shadow page table is active (or clearing/reconstructing a large portion of the shadow page table)?
Select all that apply