[ side note: it turns out that my explanation for interrupt delivery in IBM-style virtual machine systems was correct. The explanation and figure in the book are apparently meant to be a simplified approach. ]

Who is RMS?
Richard Stallman, founder of the Free Software Foundation, and its chief spokesman, apparently. See picture at http://www.softpanorama.org/People/Stallman/Images/saintignucius.jpg See fun quote at http://www.gnu.org/philosophy/shouldbefree.html "When it is true at a given time that most people will work in a certain field only for high pay, it need not remain true. The dynamic of change can run in reverse, if society provides an impetus. If we take away the possibility of great wealth, then after a while, when the people have readjusted their attitudes, they will once again be eager to work in the field for the joy of accomplishment." Note that I've never released software under the GPL :-)

What exactly is polling?
It is the act of periodically checking for events, rather than going about your other work and being asynchronously notified that some event has occurred. In other words, a system that uses polling will have an explicit check in certain parts of the code - if it fails to do this check, the event completion can be lost. (There's a small C sketch of a polling loop after this batch of questions.)

Explain handshaking, and how it relates to polling.
Handshaking refers to system interfaces that go through a series of steps in a certain order - usually at the hardware level, but the term also applies to higher levels of the system. It relates to polling because polling-based systems will often check to see whether something is ready, and then go through some mechanism to get the data. The mechanism used in systems designed for polling is often a simple handshaking-based one.

With respect to interrupts, what exactly do you mean by hardware support - what does it do? How does the hardware wait?
Hardware that uses interrupts basically has to have support for stopping whatever it's doing, storing all of the state of the processor somewhere, and switching to something else temporarily. What this means is that parts of the chip have logic that says "go about your normal tasks unless this particular signal line has a particular state". The extra logic doesn't normally affect the performance of the processor, since this is built in at a pretty low level.

What is the actual delay difference between polling and interrupts?
It all depends on what's involved. Processors with small amounts of state can save it all very quickly to process interrupts, and have very low interrupt times. Processors with lots of registers, etc., may take a lot longer to save all the info for interrupts. To derive some sane ballpark estimates, really high-speed web servers were able to send 350 Mbits/sec of data with a 333 MHz processor. That's about 45 MB/sec, and each data packet is about 1500 bytes. So, that's about 30K packets/sec, and assuming that each packet requires one interrupt, the full processing for each packet takes about 1/30 of a ms, or 33 microsecs. Notice that I haven't said anything about polling times. That's because polling when the device isn't ready is a device-specific operation. It might require as little as one access across the PCI bus, or the device may lock the bus for a little while.
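To make the polling and handshaking answers above a bit more concrete, here is a minimal C sketch. The device_* functions are hypothetical stand-ins, not any real device's interface; they just simulate a device that becomes ready after a few status checks.

    #include <stdio.h>

    static int checks_so_far = 0;

    /* Stand-in for reading a device status register; pretends the device
     * becomes ready on the third check. */
    static int device_ready(void)
    {
        return ++checks_so_far >= 3;
    }

    /* Stand-ins for the rest of the handshake: fetch the data, then
     * acknowledge it so the device knows it may produce more. */
    static int device_read_data(void) { return 42; }
    static void device_ack(void) { }

    int main(void)
    {
        /* Polling: the code must explicitly check for the event.  If this
         * check is never made, the completion is simply never noticed. */
        while (!device_ready()) {
            /* ...do other useful work here, then check again... */
        }

        /* Handshake: status said "ready", so read the data and acknowledge it. */
        int data = device_read_data();
        device_ack();
        printf("got data %d after %d status checks\n", data, checks_so_far);
        return 0;
    }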
Why do different IRQs have different priorities?
Assume a system has fast devices and slow devices, and that the fast devices are often busy. In this situation, if the fast devices got first shot at the processor, they might consume all processor time, preventing the slow device from getting service. This situation is known as starvation. So, by giving the slow device higher priority, it's possible to reduce the chance that it gets ignored. Likewise, if the slow device is slow because it does a lot of processing, it may be desirable to service that device as often as possible, so that it's always busy.

What's the point of creating a really big empty file?
Well, if I were a math geek, I could use it to store really large sparse matrices without taking up a lot of disk space. However, in this particular case, it's a way of creating a large space on disk that programs can treat as their own disk. In Project 3 when you build a file system, instead of doing it to the actual disk itself, you'll build it inside of one large file. (There's a small sketch of creating such a file after this group of questions.)

Does context switch speed increase at the same rate as CPU speed? If not, how will this affect how program time slices are allocated?
Context switch speed tends not to keep up with CPU speed. As a result, it's generally not efficient to shrink program time slices in proportion to the CPU speed. However, this is not a problem at all - programs often fall into two types, interactive and computational. The latter don't care too much about what the time slice might be. For the former, as long as the response isn't too slow, it doesn't matter. In other words, they're limited by human speed.

What's the difference between SIGKILL and SIGSTOP?
  SIGKILL    terminate process    kill program
  SIGSTOP    stop process         stop (cannot be caught or ignored)
Looks like the major difference is that SIGKILL terminates the process while SIGSTOP just stops (suspends) it - and neither one can be caught or ignored.

How small is a disk head?
If we believe http://magazine.fujitsu.com/us/vol37-2/paper10.pdf it looks like they're currently about 0.1 micron x 0.1 micron. This is for the "giant magnetoresistive" (GMR) head.

What exactly is a page?
It's one of the units used to organize the memory system. Memory obviously contains bits, which are grouped into bytes, and then words. These are often grouped into cache lines. However, even cache lines are relatively small - on the order of 32 bytes. When you need to break up the memory range into pieces that are more easily manageable, that's when pages come into play. Page sizes aren't anything magical - they're usually picked to be something that breaks memory into "enough" pages without breaking it into "too many" pages. The page sizes on modern systems tend to be 4KB or 8KB for normal pages. Special pages can have larger sizes, up to several MB on some systems. (A small example of how an address splits into a page number and an offset appears after this group of questions.)

Why did quiz question 3 specify a 32-bit system?
Because you have to give a certain number of 0's and f's when labeling the top and bottom of the address space, and by giving the number of bits in the system, it tells you how many of each to put - for 32 bits, the addresses run from 0x00000000 to 0xffffffff, i.e., eight hex digits of each.

In Visual Basic, is "on error resume next" telling the program to ignore seg faults?
Not sure, but it seems like a possibility. I would have thought that you didn't have forgeable pointers in Visual Basic, but maybe I'm wrong.

When disks go bad, what is it that tends to be failing?

Can you explain memory mapped files and their drawbacks?

Are signals only sent from the OS to the program?
Processes can "send" signals to each other, but what really goes on behind the scenes is that one process asks the OS to deliver a particular signal to the other process. So, for all practical purposes, only the OS sends signals to programs.
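As promised in the big-empty-file answer above, here is a minimal sketch of creating a large, mostly-empty file on a Unix-like system. The file name and the 100 MB size are arbitrary choices for illustration.

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create (or truncate) the file we'll treat as a "disk". */
        int fd = open("bigfile.img", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        /* Seek 100 MB minus one byte past the beginning... */
        lseek(fd, 100L * 1024 * 1024 - 1, SEEK_SET);

        /* ...and write a single byte, so the file's size becomes 100 MB.
         * Many Unix file systems won't allocate blocks for the untouched
         * middle, so the file is "big" without using much disk space. */
        write(fd, "", 1);

        close(fd);
        return 0;
    }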
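And here is the small page example mentioned above: with 4 KB pages (2^12 bytes), the low 12 bits of an address are the offset within the page, and the remaining upper bits are the page number. The macros and the sample address are just for illustration, not taken from any particular OS.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE  4096u              /* 4 KB = 2^12 bytes */
    #define PAGE_SHIFT 12
    #define PAGE_MASK  (PAGE_SIZE - 1)    /* low 12 bits */

    int main(void)
    {
        uint32_t addr = 0x12345678;                  /* an arbitrary 32-bit address */
        uint32_t page_number = addr >> PAGE_SHIFT;   /* upper 20 bits -> 0x12345 */
        uint32_t offset      = addr & PAGE_MASK;     /* lower 12 bits -> 0x678 */
        printf("page 0x%x, offset 0x%x\n",
               (unsigned)page_number, (unsigned)offset);
        return 0;
    }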
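Tying together the signal question just above and the one that follows, here is a minimal sketch using standard POSIX calls: kill() is really a request to the OS to deliver a signal, a process can catch or ignore most signals, and attempts to catch or ignore SIGKILL are refused by the OS.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Handler for a signal we choose to catch. */
    static void on_usr1(int sig)
    {
        /* (printf isn't strictly safe inside a handler; fine for a sketch) */
        printf("caught signal %d\n", sig);
    }

    int main(void)
    {
        signal(SIGUSR1, on_usr1);     /* catch SIGUSR1 */
        signal(SIGTERM, SIG_IGN);     /* ignore SIGTERM */

        /* The OS refuses to let SIGKILL be caught or ignored - that's what
         * guarantees it can always kill the process. */
        if (signal(SIGKILL, SIG_IGN) == SIG_ERR)
            printf("cannot ignore SIGKILL\n");

        /* "Sending" a signal is really asking the OS to deliver it. */
        kill(getpid(), SIGUSR1);
        return 0;
    }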
What happens if the process intentionally ignores signals? How does the OS kill the process?
There are some signals that the program isn't allowed to ignore. Those signals are what are used to kill the program.

What are the major differences between a kernel stack and a user stack?
For the time being, we can think of a user process as having a single stack, so it can grow until it meets the heap. In contrast, the space allocated to the kernel has to contain all of the kernel stacks, and each process usually has its own kernel stack. So, kernel stacks can't grow "arbitrarily" large, since they all have to be allocated in a relatively small region.

Please don't encourage people to dance around in their underwear.
Not all people who dance in their underwear can generate sales of $20M/week, so I don't think we've got anything to worry about.

The Akamai founder was a Princeton grad.
Yeah, Tom Leighton was a Princeton undergrad, I believe, but he was working as an MIT professor when he started Akamai. That's why I referred to it as an MIT company.

Why do Mac Powerbooks use less power than PC notebooks - is it the drive, other factors, or just chance?
If I had to guess, I would think that it's because the Motorola PowerPC chips are often designed to have low power use - they have a pretty good market share in the embedded systems space, where heat and power are more of an issue. So, Apple benefits from this design. Don't take that as authoritative, however.

I've heard the term "batch programs" used to refer to systems that don't multitask, so that each program is run to completion before the next one runs. In class, you mentioned batch jobs being ones where the operator ran a batch of programs together. Which is correct?
It's basically the evolution of the same concept. When all programs were run in batches, there wasn't the opportunity for interaction with the system on short time scales. As systems started adding interactive capability, there had to be some term to refer to the programs that aren't interactive, and that's how "batch" got recycled.

Did you say that disks can't write information as densely on the outer tracks of disks?
If the disk is using a constant # of sectors/track, the outer sectors will be physically larger than the inner sectors, so they will have lower information density even though the medium is capable of more. That's why disks aren't commonly designed with a constant number of sectors/track these days.

How can sector size vary on a single disk?
Some disks do support multiple sector sizes. A disk is nothing more than a set of platters coated with a magnetic medium. The disk logic can decide to separate the regions of the disk in whatever way it wants.

Since disk size is getting to the point where there is almost more space than anyone can use, how much emphasis is placed on increasing size versus performance?
Many people like to brag about how big their disks are, even if they have no possible use for them in the near future. Remember that a lot of computing has entered a commodity market, and things like consumer perception are often as important (if not more important) than technical merit. Realize that disk gigabytes mean 10^9 bytes, rather than 2^30. It's the victory of marketing over engineering. There's a revealing quote in the following article that discusses IBM moving some high-end features into low-end drives: http://news.com.com/2100-1040-960028.html In particular, the IBM guy says that the market is still about capacity.
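For a quick sense of how much the 10^9-vs-2^30 definition matters, here is a small calculation (the "80 GB" drive is just a made-up example size):

    #include <stdio.h>

    int main(void)
    {
        double marketing_gb = 1e9;                    /* 10^9 bytes */
        double binary_gb    = 1024.0 * 1024 * 1024;   /* 2^30 = 1,073,741,824 bytes */

        printf("10^9 / 2^30 = %.3f\n", marketing_gb / binary_gb);   /* ~0.931 */
        printf("an \"80 GB\" drive is about %.1f GB in 2^30 units\n",
               80.0 * marketing_gb / binary_gb);                    /* ~74.5 */
        return 0;
    }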
SCSI hard drives seem to have higher RPMs than IDE - is bandwidth the limiting factor?
As far as I know, it's not a technical limitation, but rather one of what the market wants. If someone were to build an IDE drive that spun at higher speed, most people who buy IDE drives wouldn't care, and wouldn't pay the extra for it. In contrast, the people who buy SCSI drives are already willing to pay extra for performance, so they'll pay extra for the higher rotation rates.