COS 126 Lecture 12: Sequential circuits

Combinational vs. Sequential circuits - slide 1

We've been looking at combinational circuits so far. The outputs of those circuits are determined entirely by the current combination of the inputs. We couldn't have 'loops' with these circuits. By a loop, we mean having the output of a circuit connected back to its input (usually indirectly).

Sequential circuits have loops - these enable circuits to receive feedback. You may have learned about feedback in other science classes. A simple example of a system dependent on feedback is a thermostat. It controls the temperature by receiving feedback in the form of a temperature reading. A diagram of this system is:

   +--> read temperature --+--> too high --> lower temperature --+
   |                       |                                     |
   |                       +--> too low ---> raise temperature --+
   |                                                             |
   +-------------------------------------------------------------+

This system has loops. The output of the temperature-changing functions affects the input of the temperature-checking function, and its output in turn affects the operation of the temperature-changing function(s).

Now, we can't just use the type of circuit system we've been using. The reason is that in the circuits we've looked at, we have no control over when the inputs are read. Before we act on an input value, we want to be sure that value is correct (or valid). How can the value be invalid? If the function (or circuit) that computes it hasn't finished its execution yet, it won't yet hold the correct value. (E.g., with the adder circuit, the result would be disastrous if we computed the second bit before knowing whether there would be a carry from the first bit.)

With sequential circuits, we add the control of a clock. By using a clock (which is nothing more than an alternating sequence of 0's and 1's), we can ensure that certain events occur in the correct order.
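
To make the clock idea concrete, here is a tiny Python sketch (the function name and tick count are purely illustrative, not from the slides) that produces the alternating sequence of 0's and 1's a clock supplies:

    # A clock is just a stream that alternates between 0 and 1 (illustrative sketch).
    def clock(ticks):
        for t in range(ticks):
            yield t % 2          # 0, 1, 0, 1, ...

    print(list(clock(8)))        # [0, 1, 0, 1, 0, 1, 0, 1]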


slide 2 - FLIP-FLOP

This slide shows the implementation of an S-R flip-flop. You can use a truth table to better understand how this circuit works. However, in addition to the input values of S & R, you also need to consider the value 'stored' in the flip-flop. (It is physically stored on that bottom looping wire.) In other words, you can determine the 'next state' (which corresponds to the output of a combinational function) by looking at the values of S, R, and the current state. Here's the associated truth table:

S | R | state | output
----------------------
0 | 0 |   0   |   0   - the output = S OR ((NOT R) AND state)
0 | 0 |   1   |   1   - (NOT R) AND state => 1, so the output is 1
0 | 1 |   0   |   0   - the input to the AND gate, R, is inverted; 0 AND 0 => 0
0 | 1 |   1   |   0
1 | 0 |   0   |   1
1 | 0 |   1   |   1
1 | 1 |   0   |   x   - these two rows are officially undefined, b/c you don't
1 | 1 |   1   |   x     want set and reset at the same time!

Just like with decoders and adders (in lecture 11), you should try to think of flip-flops on a higher level than their gate implementation. A flip-flop stores a bit. That stored bit can always be read (on the output line). Depending on the particular type of flip-flop, the semantics of storing the bit will vary. Slide 6 talks about different types of flip-flops.
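
If it helps, here is a minimal Python sketch (not from the slides) that reproduces the truth table above; the next state/output is just S OR ((NOT R) AND state):

    def sr_next_state(S, R, state):
        # Undefined when both set and reset are asserted at once.
        assert not (S == 1 and R == 1), "S and R must not both be 1"
        return S | ((1 - R) & state)

    # Reproduce the defined rows of the truth table: S, R, state -> output
    for S in (0, 1):
        for R in (0, 1):
            for state in (0, 1):
                if not (S == 1 and R == 1):
                    print(S, R, state, "->", sr_next_state(S, R, state))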

This slide also introduces 'timing diagrams.' Timing diagrams are used to better understand the operation of sequential functions. They are to sequential circuits what truth tables are to combinational circuits. They show the values of the inputs over time and the resulting output. When the 'line' is raised, it corresponds to a value of 1; when it is low, it corresponds to a value of 0. They are read from left to right, like a timeline. The sequence of each input value is given: in this case, S & R. The output can be computed from the values of the inputs. In the first diagram, 'out' has value 0 until set has the value 1. Set 'sets' the value of the output to 1. It stays at 1 until reset has value 1. Reset 'resets' the value stored in the flip-flop to 0 (and thus the output is 0).

A clock is just an alternating sequence of 0's and 1's with a set frequency. Clocks are used to control the sequence of events. Usually, the clock (wire) and the input it controls are combined with an AND gate, so the input is only activated when the clock has value 1. A clocked flip-flop is shown on this slide. It works exactly the same, except that SET and RESET only 'work' while the clock has value 1. Looking at the timing diagram, the first time R has value 1 it has no effect, because the clock has value 0 at that point.
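
As a rough sketch of that gating (Python again, names illustrative), clocking just ANDs each control input with the clock before it reaches the flip-flop, so nothing changes while the clock is 0:

    def clocked_sr_next_state(S, R, clock, state):
        S, R = S & clock, R & clock          # the clock gates both inputs
        return S | ((1 - R) & state)         # same next-state rule as before

    state = 0
    state = clocked_sr_next_state(1, 0, 0, state)   # clock low: S is ignored, state stays 0
    state = clocked_sr_next_state(1, 0, 1, state)   # clock high: SET takes effect
    print(state)                                    # 1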


slide 3 - Register

A register is just a sequence of flip-flops. In TOY, we had 16-bit registers. A series of 16 flip-flops can be used to form one of these registers. The clock (and the extra control of a LOAD line) is used to ensure that new values are loaded only when they are supposed to be. Here's how it works. If LOAD has value 0, all the AND gates have output 0, so neither S nor R (on any of the flip-flops) has value 1. The register is unchanged, which is what we want. If LOAD has value 1 and a bit, x, has value 1, then the AND gate connected to the S input of its flip-flop will have output 1. So, SET will be activated, and a 1 will be stored in that flip-flop. Alternatively, if bit x has value 0, the AND gate connected to its RESET input will have output 1, and a 0 will be stored in that flip-flop.

All the values will be set in parallel.
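
Here is one way to sketch that behavior in Python (a simulation of the idea, not the gate-level circuit); the names and the 16-bit width follow the TOY example:

    def load_register(register, bits, load):
        # With LOAD = 0, none of the AND gates fire, so the register keeps its value.
        if load == 0:
            return register
        # With LOAD = 1, each bit drives SET if it is 1 and RESET if it is 0,
        # so every flip-flop takes on its new bit, all in parallel.
        return list(bits)

    reg = [0] * 16
    reg = load_register(reg, [1, 0, 1, 1] + [0] * 12, load=1)
    print(reg[:4])    # [1, 0, 1, 1]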

The slide mentions several applications for registers.


slide 4 - Memory (bits)

To implement memory, we need a control for reading in addition to writing. We also want to be able to control which flip-flop we read. The SELECT control chooses the desired flip-flop using a decoder: by connecting the SELECT lines to the inputs of the decoder, we control which flip-flop is read.
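
A minimal sketch of the selection step (Python, names illustrative): the decoder turns k SELECT lines into exactly one active line out of 2^k, and that line picks the flip-flop whose value we read.

    def decoder(select_bits):
        # k select lines activate exactly one of 2**k output lines.
        index = int("".join(str(b) for b in select_bits), 2)
        outputs = [0] * (2 ** len(select_bits))
        outputs[index] = 1
        return outputs

    flip_flops = [0, 1, 1, 0]                 # four stored bits
    select = decoder([1, 0])                  # SELECT = 10 (binary) picks flip-flop 2
    print([bit for bit, s in zip(flip_flops, select) if s])   # [1]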

This slide demonstrates how computers are built up with components such as decoders and flip-flops. Each time we create a new 'system,' we can think of it abstractly, as a 'black box.' As long as we know how many inputs and outputs a circuit has and understand the function it performs, we can represent this component of the computer more abstractly. The Memory Bit 'black box' on the right of this slide is a good example of this.

Think back to what we've already learned to appreciate the power of this abstraction. It hides a lot of complexity. The Memory Bit consists of a Decoder and Flip-Flops. Decoders and Flip-Flops are built up from primitive gates like AND, OR, and NOT, and those gates are built from transistors. Attempting to draw this Memory Bit system out of transistors alone would be extremely tedious, and we would likely make a mistake. The idea of abstraction is central to the architecture of computers (and to computer science in general).


slide 5 - Memory (words)

To build memory, memory bits can be combined, much like we did for registers. Don't worry too much about the actual implementation. Just try to understand the overall idea: look at the drawing of memory in the lower right-hand corner of the slide. Memory has as its inputs an address (addr), input bits (in), and a read/write bit (R/W). The read/write bit controls whether we are reading from or writing to memory. The address specifies which memory cell we want to access. If we're writing, the input bits will overwrite whatever was previously stored at the location addressed by the addr bits. If we're reading, the output bits will get the value of whatever is stored at the addressed memory cell.
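
In the same spirit, here is a rough Python model of that interface (addr, in, R/W); the memory size and word width are made up for illustration:

    memory = [0] * 256                  # 256 words, all initially 0

    def access(addr, data_in, write):
        # write = 1: store data_in at addr; write = 0: return the word stored at addr.
        if write:
            memory[addr] = data_in
            return None
        return memory[addr]

    access(42, 0b1011, write=1)
    print(bin(access(42, 0, write=0)))  # 0b1011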


slide 6 - Other kinds of flip-flops

A D-type flip-flop uses an S-R flip-flop for its implementation. The input D controls what is written to the flip-flop. If the clock is active (value 1), then the flip-flop will take on D's value (0 or 1). A timing diagram is given for a 0-1-1 sequence of D values.
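
One standard way to get this behavior from an S-R flip-flop (sketched below in Python; this may not match the slide's exact wiring) is to drive S with D and R with NOT D, both gated by the clock:

    def d_next_state(D, clock, state):
        S = D & clock                   # set when D = 1 and the clock is high
        R = (1 - D) & clock             # reset when D = 0 and the clock is high
        return S | ((1 - R) & state)    # ordinary S-R next-state rule

    state = 0
    for D in (0, 1, 1):                 # the 0-1-1 sequence of D values
        state = d_next_state(D, 1, state)
        print(state)                    # prints 0, then 1, then 1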

This slide also talks about edge-triggered flip-flops. Edge-triggered flip-flops are more controlled. Adding a clock to a flip-flop increases control by ensuring that the value of the flip-flop can only change while the clock has value 1. The value stored in an edge-triggered flip-flop can only change on an edge (either the rising or the falling edge) of the clock 'tick.' (We think of a clock tick as its 1-values.)

Other combinations and implementations are also possible. In a master-slave flip-flop, a bit is stored when the clock has value 1 and then goes to 0; that is, it is stored only on a 'falling' edge. A timing diagram illustrates this.
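
One way to picture the master-slave behavior in code (a simplified model, not the gate diagram): the master stage follows D while the clock is 1, and the slave copies the master when the clock falls to 0, so the visible stored bit changes only on a falling edge.

    class MasterSlaveFlipFlop:
        def __init__(self):
            self.master = 0
            self.slave = 0              # the externally visible stored bit
            self.prev_clock = 0

        def tick(self, D, clock):
            if clock == 1:
                self.master = D                       # master follows D while the clock is high
            if self.prev_clock == 1 and clock == 0:   # falling edge of the clock
                self.slave = self.master              # slave latches the master's value
            self.prev_clock = clock
            return self.slave

    ff = MasterSlaveFlipFlop()
    print(ff.tick(1, 1), ff.tick(1, 0))   # 0 1 -- the output changes only on the falling edge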


slide 7 - Counter

By chaining together master-slave flip-flops, a counter can be created. The output of each flip-flop is the input to the next flip-flop. Later flip-flops have longer periods because of the delay and the fact that the flip-flops are edge-triggered. The timing diagram for this is turned sideways (to fit on the slide, I presume). Rotate your book 90 degrees clockwise to read it correctly. The top line is the clock. Interpret the next lines as a binary number, read from bottom to top (the top line corresponds to the least significant bit, and the bottom line corresponds to the most significant bit). Each clock pulse increments the binary number by one. So the circuit counts clock ticks.
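
As a sanity check of the counting idea, here is a small Python simulation (not the circuit itself): each stage toggles when the stage before it falls from 1 to 0, so after t clock pulses the stages hold the binary representation of t.

    def ripple_counter(ticks, n_bits=4):
        bits = [0] * n_bits                    # bits[0] is the least significant bit
        for _ in range(ticks):
            carry = True                       # each clock pulse toggles the first stage
            for i in range(n_bits):
                if not carry:
                    break
                bits[i] ^= 1                   # toggle this stage
                carry = (bits[i] == 0)         # a 1 -> 0 toggle ripples to the next stage
        return bits

    print(ripple_counter(5))    # [1, 0, 1, 0], i.e. binary 0101 = 5 clock ticks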


slide 8 - High-level view of computer

This is a big jump upwards in terms of level of abstraction. The clock is critical to the operation of the computer.