COS418 Assignment 4: Raft Log Consensus


This is the second in a series of assignments in which you'll build a fault-tolerant key/value storage system. You started off in Assignment 3 by implementing Raft's leader election. In this assignment, you will implement Raft's core feature: log consensus. In the next assignment, you will build a key/value service that uses this Raft implementation as a foundational module.

While being able to elect a leader is useful, we want to use Raft to keep a consistent, replicated log of operations. To do so, we need to have the servers accept client operations through Start(), and insert them into the log. In Raft, only the leader is allowed to append to the log, and should disseminate new entries to other servers by including them in its outgoing AppendEntries RPCs.
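
As a starting point, here is a minimal sketch of what Start() might look like on the leader. The struct fields (currentTerm, isLeader, log) are illustrative stand-ins for whatever state your Assignment 3 code already keeps; the return values follow the lab skeleton's convention of (index, term, isLeader).

```go
package main

import (
	"fmt"
	"sync"
)

// LogEntry pairs a client command with the term in which the
// leader received it (field names here are illustrative).
type LogEntry struct {
	Term    int
	Command interface{}
}

// Raft is a minimal stand-in for the struct in raft.go.
type Raft struct {
	mu          sync.Mutex
	currentTerm int
	isLeader    bool
	log         []LogEntry
}

// Start appends a new command to the leader's log and returns the
// entry's index, the current term, and whether this server believes
// it is the leader. Non-leaders return immediately without appending.
func (rf *Raft) Start(command interface{}) (int, int, bool) {
	rf.mu.Lock()
	defer rf.mu.Unlock()
	if !rf.isLeader {
		return -1, rf.currentTerm, false
	}
	rf.log = append(rf.log, LogEntry{Term: rf.currentTerm, Command: command})
	index := len(rf.log) // 1-based index, matching the paper's convention
	return index, rf.currentTerm, true
}

func main() {
	rf := &Raft{currentTerm: 2, isLeader: true}
	index, term, ok := rf.Start("put x=1")
	fmt.Println(index, term, ok) // 1 2 true
}
```

Note that Start() should return immediately; the new entry is replicated asynchronously by the leader's outgoing AppendEntries RPCs.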

If this sounds only vaguely familiar (or even if it's crystal clear), you are highly encouraged to go back to reread the extended Raft paper, the Raft lecture notes, and the illustrated Raft guide. You should, of course, also review your work from Assignment 3, as this assignment directly builds off that.


You will continue to use the same cos418 code bundle from the previous assignments. For this assignment, we will focus primarily on the code and tests for the Raft implementation in src/raft and the simple RPC-like system in src/labrpc. It is worth your while to read and digest the code in these packages again, including your implementation from Assignment 3.

Part I

In this lab you'll implement most of the Raft design described in the extended paper, including saving persistent state and reading it after a node fails and then restarts. You will not implement cluster membership changes (Section 6) or log compaction / snapshotting (Section 7).

A set of Raft instances talks to each other with RPC to maintain replicated logs. Your Raft interface will support an indefinite sequence of numbered commands, also called log entries; each entry is identified by its index. The log entry at a given index will eventually be committed. At that point, your Raft should send the log entry to the larger service for it to execute.
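
Committed entries reach the service through the applyCh channel passed to Make(). A sketch of the pattern, assuming an ApplyMsg carrying an Index and a Command as in the skeleton's raft.go: whenever commitIndex advances past lastApplied, each newly committed entry is sent exactly once, in log order.

```go
package main

import "fmt"

// ApplyMsg mirrors the struct in raft.go that carries a
// committed entry up to the service layer.
type ApplyMsg struct {
	Index   int
	Command interface{}
}

func main() {
	// Suppose commitIndex has advanced past lastApplied; each newly
	// committed entry is delivered on applyCh once, in index order.
	log := []interface{}{"a", "b", "c"}
	lastApplied, commitIndex := 0, 3
	applyCh := make(chan ApplyMsg, 3)

	for lastApplied < commitIndex {
		lastApplied++
		applyCh <- ApplyMsg{Index: lastApplied, Command: log[lastApplied-1]}
	}
	close(applyCh)
	for msg := range applyCh {
		fmt.Println(msg.Index, msg.Command)
	}
}
```

In your implementation the send may block (the real applyCh is unbuffered), so many solutions run this loop in a dedicated goroutine rather than inside an RPC handler.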

Your first major task is to implement the leader and follower code to append new log entries. This will involve implementing Start(), completing the AppendEntries RPC structs, sending them, and fleshing out the AppendEntries RPC handler. Your goal should first be to pass the TestBasicAgree() test (in test_test.go). Once you have that working, you should try to get all the tests before the "basic persistence" test to pass before moving on.
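
The AppendEntries argument and reply structs can be transcribed almost directly from Figure 2 of the paper. A sketch (field names chosen to match the paper; remember they must be capitalized so the RPC system will marshal them):

```go
package main

import "fmt"

// LogEntry holds one command and the term in which it was appended.
type LogEntry struct {
	Term    int
	Command interface{}
}

// AppendEntriesArgs follows Figure 2 of the extended Raft paper.
type AppendEntriesArgs struct {
	Term         int        // leader's term
	LeaderId     int        // so followers can redirect clients
	PrevLogIndex int        // index of entry immediately preceding the new ones
	PrevLogTerm  int        // term of the entry at PrevLogIndex
	Entries      []LogEntry // entries to store (empty for heartbeats)
	LeaderCommit int        // leader's commitIndex
}

// AppendEntriesReply carries the follower's response.
type AppendEntriesReply struct {
	Term    int  // follower's currentTerm, so the leader can step down
	Success bool // true if the follower matched PrevLogIndex/PrevLogTerm
}

func main() {
	args := AppendEntriesArgs{Term: 3, PrevLogIndex: 4, PrevLogTerm: 2,
		Entries: []LogEntry{{Term: 3, Command: "x"}}}
	fmt.Println(len(args.Entries), args.Term) // 1 3
}
```

Your Assignment 3 heartbeat code likely already has a version of these structs; extending it with the log-related fields is usually easier than starting over.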

Only RPC may be used for interaction between different Raft instances. For example, different instances of your Raft implementation are not allowed to share Go variables. Your implementation should not use files at all.

Part II

The next major task is to handle the fault-tolerance aspects of the Raft protocol, making your implementation robust against various kinds of failures, including servers that miss some RPCs and servers that crash and restart.

A Raft-based server must be able to pick up where it left off, and continue if the computer it is running on reboots. This requires that Raft keep persistent state that survives a reboot (the paper's Figure 2 mentions which state should be persistent).

A “real” implementation would do this by writing Raft's persistent state to disk each time it changes, and reading the latest saved state from disk when restarting after a reboot. Your implementation won't use the disk; instead, it will save and restore persistent state from a Persister object (see persister.go). Whoever calls Make() supplies a Persister that initially holds Raft's most recently persisted state (if any). Raft should initialize its state from that Persister, and should use it to save its persistent state each time the state changes. You can use the ReadRaftState() and SaveRaftState() methods for this respectively.

Implement persistence by first adding code to serialize any state that needs persisting in persist(), and to deserialize that same state in readPersist(). You then need to determine at what points in the Raft protocol your servers are required to persist their state, and insert calls to persist() in those places. Once this code is complete, you should pass the remaining tests. You may want to first try to pass the "basic persistence" test (go test -run 'TestPersist1$'), and then tackle the remaining ones.

You will need to encode the state as an array of bytes in order to pass it to the Persister; raft.go contains some example code for this in persist() and readPersist().
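
Concretely, the encoding round trip might look like the following sketch, which serializes the three pieces of persistent state named in Figure 2 (currentTerm, votedFor, log) with encoding/gob. The helper names are hypothetical; in raft.go the encode side belongs in persist() (feeding persister.SaveRaftState) and the decode side in readPersist() (consuming persister.ReadRaftState).

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

type LogEntry struct {
	Term    int
	Command interface{}
}

// encodeState serializes the persistent state into a byte slice,
// as persist() must before handing it to the Persister.
func encodeState(currentTerm, votedFor int, log []LogEntry) []byte {
	w := new(bytes.Buffer)
	e := gob.NewEncoder(w)
	e.Encode(currentTerm)
	e.Encode(votedFor)
	e.Encode(log)
	return w.Bytes()
}

// decodeState is the inverse, decoding fields in the same order,
// as readPersist() must do after a restart.
func decodeState(data []byte) (currentTerm, votedFor int, log []LogEntry) {
	d := gob.NewDecoder(bytes.NewBuffer(data))
	d.Decode(&currentTerm)
	d.Decode(&votedFor)
	d.Decode(&log)
	return
}

func main() {
	// gob needs the concrete type stored in the interface{} field
	// registered before it can encode Command values.
	gob.Register("")
	data := encodeState(5, 2, []LogEntry{{Term: 4, Command: "set y"}})
	term, voted, log := decodeState(data)
	fmt.Println(term, voted, log[0].Term, log[0].Command)
}
```

Decoding must read fields in exactly the order they were encoded; mismatched order is a classic source of silent corruption here.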

In order to pass some of the challenging tests towards the end, such as those marked "unreliable", you will need to implement the optimization to allow a follower to back up the leader's nextIndex by more than one entry at a time. See the description in the extended Raft paper starting at the bottom of page 7 and top of page 8 (marked by a gray line).
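
One common way to realize this optimization (the paper only sketches it, so the field names and exact strategy below are an assumption, not part of the required interface) is for a rejecting follower to report the term of its conflicting entry and the first index it holds for that term; the leader can then skip nextIndex back over a whole term at once:

```go
package main

import "fmt"

// rejection models extra fields a follower could add to its
// AppendEntries reply when Success is false (names are illustrative).
type rejection struct {
	ConflictTerm  int // term of the conflicting entry, or -1 if the log is too short
	ConflictIndex int // first index the follower stores for ConflictTerm
}

// nextIndexAfterReject computes the leader's new nextIndex for a follower:
// if the leader has entries from ConflictTerm, back up to just past its
// last entry of that term; otherwise jump straight to ConflictIndex.
func nextIndexAfterReject(leaderTerms []int, rej rejection) int {
	if rej.ConflictTerm != -1 {
		for i := len(leaderTerms) - 1; i >= 0; i-- {
			if leaderTerms[i] == rej.ConflictTerm {
				return i + 2 // 1-based index just past that entry
			}
		}
	}
	return rej.ConflictIndex
}

func main() {
	// Leader's log terms at 1-based indices 1..5:
	leaderTerms := []int{1, 1, 2, 2, 3}
	// Follower conflicted in term 2, which it first holds at index 3.
	fmt.Println(nextIndexAfterReject(leaderTerms, rejection{ConflictTerm: 2, ConflictIndex: 3})) // 5
	// Follower's log ends before PrevLogIndex entirely.
	fmt.Println(nextIndexAfterReject(leaderTerms, rejection{ConflictTerm: -1, ConflictIndex: 4})) // 4
}
```

Either way, the leader then retries AppendEntries from the new nextIndex; correctness does not depend on backing up by the exact minimum, only on eventually reaching a matching prefix.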

Resources and Advice

  • Remember that the field names of any structures you send over RPC (e.g. information about each log entry) must start with capital letters, as must the field names of any structure nested inside an RPC argument.
  • Similarly to how the RPC system only sends structure field names that begin with upper-case letters, and silently ignores fields whose names start with lower-case letters, the GOB encoder you'll use to save persistent state only saves fields whose names start with upper case letters. This is a common source of mysterious bugs, since Go doesn't warn you.
  • While the Raft leader is the only server that causes entries to be appended to the log, all the servers need to independently give newly committed entries to their local service replica (via their own applyCh). Because of this, you should try to keep these two activities as separate as possible.
  • It is possible to figure out the minimum number of messages Raft should use when reaching agreement in non-failure cases. You should make your implementation use that minimum.
  • In order to avoid running out of memory, Raft must periodically discard old log entries, but you do not have to worry about garbage collecting the log in this lab. You will implement that in the next lab by using snapshotting (Section 7 in the paper).
Submission

Submit your code to the COS418 Assignment 4 Dropbox. You may submit multiple times; only the version in the Dropbox at the time of grading will be recorded.

The Dropbox script will run only a small number of tests (though you should of course pass these!). It is not a substitute for running the full set of go test cases detailed above yourself. Before submitting, please run the full suite of tests with go test one final time. You are responsible for making sure your code works.

You will receive full credit for Part I if your software passes the tests mentioned for that section on the CS servers, and likewise full credit for Part II if it passes the tests mentioned for that section.

The final portion of your credit is determined by code quality tests, using the standard tools gofmt and go vet. You will receive full credit for this portion if all submitted files conform to the style standards set by gofmt and the report from go vet is clean for your raft package (that is, produces no errors). If your code does not pass the gofmt test, you should reformat your code using the tool. You can also use the Go Checkstyle tool for advice on improving your code's style, if applicable. Additionally, though not part of the graded checks, it is advisable to produce code that complies with Golint where possible.


This assignment is adapted from MIT's 6.824 course. Thanks to Frans Kaashoek, Robert Morris, and Nickolai Zeldovich for their support.

Last updated: 2017-11-09 11:45:27 -0500