How do segments & pages work in the x86?
The original x86 segment approach used the "segment" just to expand the address space - the physical address was calculated as (segment value * 16) + offset. Other (non-x86) segment schemes, which had a small number of segments, also had to deal with pages; in some cases they placed pages within the segments rather than moving entire segments in and out of memory. What the x86 usage seems to be these days (off the top of my head) is to use the 32-bit offset within the segment to cover the whole 32-bit address space, which basically reduces the ugly segmentation scheme to a plain paging scheme. However, I'm not sure that this is exactly how it's implemented in hardware. The segments still exist in some form, as evidenced by the fact that the chips can generate 36 bits of physical address. (A small worked example of the real-mode calculation appears after this batch of questions.)

32-bit machines can have 64GB of memory
Note that this was just referring to the x86 - a new 32-bit design is unlikely to have support for more than 32 bits of physical address.

Why would Arizona only use 1/4 of its memory?
The processes running on it weren't consuming much memory, so most of the memory on the machine was in the "free" state. The programs could still get this memory if they needed it; they just weren't generating the demand.

How does the process inform the OS that it needs more memory when using virtual memory?
The OS is what gives out the memory in the first place, so processes don't have to explicitly tell it about their current memory requirements. The heap can be increased by the application (via the libraries) calling brk/sbrk (or their modern equivalents), and the OS uses some virtual memory tricks to figure out how much of the stack is being used. (A small sbrk sketch appears after this batch of questions.)

If you're running a set of processes that use less total memory than the physical memory of the machine, is virtual memory still used?
The mapping aspects of it are still being used, allowing all processes to deal with virtual address spaces rather than physical address spaces. However, no use of swap space on disk would be expected.

Was PTE shorthand for Page Table Entry?
Yes.

Why are caches so much slower than registers if they're both on-chip?
If caches used the same design style as registers, they'd be faster, but much larger. So there's a tradeoff between how fast you can make your cache and how much of it can fit on the chip. People generally opt for a larger cache that's moderately slower than registers rather than a really tiny one that's only a little slower.

Do user programs that use overlays know anything about the overlays?
In the original days, yes - then again, those programs were often written in assembly. Toward the end of the DOS days, programs on the x86 did start using mechanisms that looked like overlays, but these often had compiler support so that all of the mechanism was hidden.

When do we get grades for project 1?
I've asked the TA, and he's promised me this weekend.

Is it feasible to build computers that don't have disks at all, and just run everything in main memory?
Yes - in some environments, this is done to avoid the possibility of mechanical disk failures. There are non-volatile memory devices that have disk interfaces, and these are being used to store the OS and the necessary programs.
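Here's a minimal sketch in C of the real-mode (segment value * 16) + offset calculation from the first question above; the segment and offset values are made up purely for illustration.

#include <stdio.h>
#include <stdint.h>

/* Real-mode x86: physical address = (segment << 4) + offset.
 * The values below are arbitrary, chosen only for illustration. */
int main(void)
{
    uint16_t segment  = 0x1234;
    uint16_t offset   = 0x0010;
    uint32_t physical = ((uint32_t)segment << 4) + offset;   /* 0x12340 + 0x10 = 0x12350 */

    printf("%04x:%04x -> physical 0x%05x\n", segment, offset, (unsigned)physical);
    return 0;
}

Note that different segment:offset pairs can name the same physical byte (e.g. 1234:0010 and 1235:0000 both give 0x12350), which is part of what made the scheme ugly.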
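And a minimal sketch of a process growing its own heap with sbrk, as mentioned in the brk/sbrk answer above. In practice the C library's malloc issues brk/sbrk (or mmap) on your behalf; the 4096-byte increment here is arbitrary.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    void *old_break = sbrk(0);          /* current end of the heap ("program break") */

    if (sbrk(4096) == (void *)-1) {     /* ask the OS for 4096 more bytes of heap */
        perror("sbrk");
        return 1;
    }

    void *new_break = sbrk(0);
    printf("heap grew from %p to %p\n", old_break, new_break);
    return 0;
}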
Could we move Dawson's Creek to a stretch break?
We could, but on the lectures that require a little more attention, I've found it's useful to "cleanse the palate" at the beginning of the class. This particular lecture has been problematic in the past since it covers a variety of related schemes - hence the decompression at the beginning.

It seems like disks must become outdated in the future to deal with digital expansion - if disks become too slow, is there research in optical computers or other digital storage?
Some replacements exist, such as "solid state disks". However, what gets manufactured is largely a question of what the market wants, and there's still enough impetus to keep making disks the same way and to periodically reduce the form factor as density increases. I'm not sure there's a direct tie-in with optical computers, since those are aimed at speeding up the processors (and hence sit at the other end of the memory hierarchy).

By hardware solutions, do you mean the base+bounds registers?
All of the other diagrams (base+bounds, segmentation, paging) use hardware support to make things faster.

What do you mean by "relocation through hardware"?
In general, the virtual address gets translated to a physical address by the hardware. I believe on one particular slide we were talking about multiple instances of a process having different data segments, with everything being addressed physically. In that case, one way of avoiding "fixing up" the references in the text segment is to have all references to global data go indirectly through a register. Then all you have to do is set that register appropriately, and multiple instances of a program can share the same text segment without modification. (A small sketch of this indirection appears after this batch of questions.)

If there are multiple offset registers, how does the process know which parts of the program correspond to which offset?
The translation isn't something that the process would necessarily have to worry about. If you have 4 program segments, one way of doing it would be to have the two high-order bits of the virtual address determine the offset register. This would limit each program segment to 1/4 of the virtual address space, but it would be fast. The hardware could also have a table that says which portion of the virtual address space each register is handling. (A sketch of the two-high-order-bits approach also appears below.)

How big of a performance hit is virtual memory?
Anything from practically unnoticeable to really painful. As long as programs exhibit good locality in their memory accesses and don't consume more memory than is physically available, virtual memory costs almost nothing. If programs regularly try to access a lot more memory than the machine has, the disk speed will be obvious. And if programs have poor locality and overwhelm the virtual memory hardware, they'll slow down even if they aren't using more than the amount of physical memory.

When a process makes a system call, does it work in its own address space? If the process is in user mode, does this mean that both user-mode and kernel-mode memory are accessible at the same time?
That's the whole point of having the kernel mapped into the address space of every process: when the process makes a system call, the kernel can keep the address-space mappings and access the user memory directly. The kernel memory is protected, so the process can't access it, even though it's in the same virtual address space.

Granularity of pages
I noticed that some people commented that pages have large granularity. Actually, 4KB (a common value for page size) is relatively small compared to main memory. On a machine with 512MB of memory, that works out to 131,072 (128K) pages.
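Here's a minimal sketch, in C rather than assembly, of the idea from the "relocation through hardware" answer above: global data is reached indirectly through one pointer (standing in for the dedicated register), so the same code can serve different data blocks without any fix-ups. The struct and function names are invented for illustration.

/* Every global lives in a block reached through a single pointer.
 * Giving each instance of the program its own block means the shared
 * text never needs its data references patched. */
struct globals {
    int counter;
};

int bump(struct globals *g)       /* g plays the role of the data register */
{
    return ++g->counter;          /* reference is relative to g, not an absolute address */
}

int main(void)
{
    struct globals instance_a = { 0 }, instance_b = { 100 };
    bump(&instance_a);            /* same code ...               */
    bump(&instance_b);            /* ... different data, no fix-ups needed */
    return 0;
}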
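And a minimal sketch of the scheme from the multiple-offset-registers answer: the two high-order bits of a 32-bit virtual address select one of four base+bounds registers, and the remaining bits are checked against that segment's limit. The register contents and addresses are invented for illustration.

#include <stdint.h>
#include <stdio.h>

/* Four hypothetical segment registers, each holding a base and a limit. */
struct seg { uint32_t base; uint32_t limit; };

static const struct seg segs[4] = {
    { 0x00100000, 0x00020000 },   /* 0: text  (made-up values) */
    { 0x00400000, 0x00010000 },   /* 1: data  */
    { 0x00800000, 0x00040000 },   /* 2: heap  */
    { 0x00c00000, 0x00008000 },   /* 3: stack */
};

int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t index  = vaddr >> 30;          /* top 2 bits pick the segment register */
    uint32_t offset = vaddr & 0x3fffffff;   /* low 30 bits are the offset within it */

    if (offset >= segs[index].limit)
        return -1;                          /* bounds check fails: fault */
    *paddr = segs[index].base + offset;     /* base+bounds relocation */
    return 0;
}

int main(void)
{
    uint32_t p;
    if (translate(0x40000004, &p) == 0)     /* segment 1, offset 4 */
        printf("physical 0x%08x\n", (unsigned)p);   /* prints 0x00400004 */
    return 0;
}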
How do pages relate to today's lecture? It seems like we just talked about how processes' address spaces relate to virtual memory.
We talked about various schemes for relating them, and got to the paging approach. The segment approach is mostly historical. Pages can be thought of as really small segments, but as shown above, there are far more pages in the system than there would be segments, so you can't easily extend the hardware support for segments into hardware support for pages. The next lecture will focus mostly on page-based virtual memory.

Explain the hardware versus software approach for memory management (slide 9).
If you have hardware support for doing virtual-to-physical translations, then you don't need to worry about two programs using the same virtual address, because they can be mapped to different physical addresses. If you don't have hardware support and want to give two programs similar behavior, then before running a program you need to check whether any of the memory it's using is already in use by another program; in that case, you can write the other program out to disk and then load the program you want to run. In other words, two programs can only coexist in memory if their addresses don't overlap.

For the IBM 360 (p. 196 of the book), were user programs restricted to only 2KB since they used a 4-bit protection code?
No - they could access all the 2KB blocks that matched their protection code. So one way of thinking about this is that memory had a protection granularity of 2KB. If you wanted to make sure that two programs couldn't interfere with each other, you couldn't put them within the same 2KB block - they'd have to be in different blocks, and the codes on those blocks would have to differ. Each one could have as many 2KB blocks as the OS allowed.

Emacs files with color?
Put the following in ".emacs":

(defun emacs-major-version-id ()
  (if (string-match "Emacs 18" (emacs-version))
      18
    emacs-major-version))

;; variable first existed in v19.23
(if (and window-system
         (not (string-match "XEmacs" emacs-version))
         (>= (emacs-major-version-id) 19))
    (load "~/.emacs.19.highlight"))

and put the following in ".emacs.19.highlight":

(setq hilit-mode-enable-list '(not text-mode)
      hilit-background-mode 'light
      hilit-inhibit-hooks nil
      hilit-inhibit-rebinding nil)
(require 'hilit19)
(setq hilit-quietly t)

(defun hilit-return ()
  "Hilit a line when the return key is hit!!!!"
  (interactive)
  (if (equal mode-name "Text")
      (newline)
    (setq current-point (point))
    (beginning-of-line 0)
    (setq b (point))
    (goto-char current-point)
    (newline)
    (hilit-rehighlight-region b (point))))

(defun hilit-function ()
  "Hilit a function!"
  (interactive)
  (if (equal mode-name "Text")
      ()
    (setq b (beginning-of-defun))
    (setq e (end-of-defun))
    (hilit-rehighlight-region b e)))

(global-set-key '[C-f2] 'hilit-function)
(global-set-key '[C-f1] 'hilit-rehighlight-region)
;; (global-set-key "\C-l" 'hilit-rehighlight-buffer)

(hilit-associate 'hilit-face-translation-table 'comment 'darkviolet-italic)
(hilit-associate 'hilit-face-translation-table 'define 'navy-bold)
(hilit-associate 'hilit-face-translation-table 'defun 'navy-bold)
(hilit-associate 'hilit-face-translation-table 'include 'navy-bold)
(hilit-associate 'hilit-face-translation-table 'keyword 'brown-bold)
(hilit-associate 'hilit-face-translation-table 'string 'forestgreen)
(hilit-associate 'hilit-face-translation-table 'declaration 'darkgoldenrod)
(hilit-associate 'hilit-face-translation-table 'decl 'brown-underline)
(hilit-associate 'hilit-face-translation-table 'struct 'orange-bold)
(hilit-associate 'hilit-face-translation-table 'function 'sienna)
(hilit-associate 'hilit-face-translation-table 'type 'steelblue-bold)
(hilit-associate 'hilit-face-translation-table 'dired-directory 'magenta-bold)
(hilit-associate 'hilit-face-translation-table 'dired-link 'darkgreen-italic)
(hilit-associate 'hilit-face-translation-table 'dired-ignored 'chocolate)
(hilit-associate 'hilit-face-translation-table 'dired-deleted 'coral)
(hilit-associate 'hilit-face-translation-table 'dired-marked 'orange)
(hilit-associate 'hilit-face-translation-table 'label 'slateblue-underline)
(hilit-associate 'hilit-face-translation-table 'error 'maroon)
(hilit-associate 'hilit-face-translation-table 'warning 'navy)
(hilit-associate 'hilit-face-translation-table 'msg-subject 'slateblue-bold)
(hilit-associate 'hilit-face-translation-table 'msg-header 'tomato-bold)
(hilit-associate 'hilit-face-translation-table 'msg-quote 'green3)
(hilit-associate 'hilit-face-translation-table 'named-param 'DarkSlateBlue)
(hilit-associate 'hilit-face-translation-table 'crossref 'coral)
(hilit-associate 'hilit-face-translation-table 'formula 'DarkSlateBlue)
(hilit-associate 'hilit-face-translation-table 'rule 'Red-bold-underline)

;; types
(set-foreground-color "black")
(set-background-color "white")
(set-cursor-color "black")
(set-face-foreground 'region "black")
(set-face-background 'region "gray")
(set-face-foreground 'secondary-selection "black")
(set-face-background 'highlight "coral")
(set-face-background 'secondary-selection "paleturquoise")
(set-face-foreground 'highlight "lavender")

(hilit-add-pattern "\\\\[@a-zA-Z\*]+" "" 'defun 'latex-mode nil)
; (add-hook 'latex-mode-hook
;   (lambda ()
;     (hilit-add-pattern "\\\\[@a-zA-Z]+" "" 'defun 'latex-mode nil)
;     (hilit-rehighlight-buffer)
;   ))