Practical Byzantine Fault Tolerance
Castro & Liskov (presented by Suman Karumuri)
Byzantine fault
• A process behaves in an inconsistent (arbitrary) manner.
• Cannot be tolerated unless n ≥ 3f + 1
  – n: number of processes
  – f: number of faults
Replicated State Machine problem
• The state of the system is modeled as a state machine with
  – State variables
  – Operations
    • Reads
    • Updates
• The state of the machine is replicated across processes, some of which are Byzantine.
• Goals:
  – Keep all the state machines consistent.
  – Do it efficiently.
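The replicated state machine abstraction above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the class and operation names are invented for the example. The key property is that identical replicas fed the same deterministic operations in the same order end in the same state.

```python
class CounterStateMachine:
    """Toy state machine: one state variable, a read and an update operation."""

    def __init__(self):
        self.value = 0  # the state variable; every replica starts the same

    def apply(self, op):
        """Deterministically apply an operation; reads return state, updates change it."""
        kind, arg = op
        if kind == "read":
            return self.value
        if kind == "add":  # an update operation
            self.value += arg
            return self.value
        raise ValueError(f"unknown operation: {kind}")


# Replicate across several nodes and feed each the same ordered log.
replicas = [CounterStateMachine() for _ in range(4)]
log = [("add", 5), ("add", 3), ("read", None)]
results = [[r.apply(op) for op in log] for r in replicas]
assert all(r.value == 8 for r in replicas)  # all replicas agree on the final state
```

Consistency therefore reduces to agreeing on the order of operations, which is exactly what the protocol below establishes.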
Previous Work
• Previous work is not really practical: too many messages.
• Assumes synchrony
  – Bounds on message delays and process speeds.
The Model
• Networks are unreliable
  – Can delay, reorder, drop, or retransmit messages.
• Adversary can
  – Coordinate faulty nodes.
  – Delay communication.
• Adversary can't
  – Delay correct nodes indefinitely.
  – Break cryptographic protocols.
• Some fraction of nodes are Byzantine
  – May behave in any way, and need not follow the protocol.
• Nodes can verify the authenticity of messages sent to them.
Service properties
• Every operation performed by the client is deterministic.
• Safety: the replicated service satisfies linearizability.
• Liveness: clients eventually receive replies to their requests if
  – at most ⌊(n-1)/3⌋ replicas are faulty, and
  – the network delay does not grow faster than time indefinitely.
• Security and privacy are out of scope.
Assumptions about Nodes
• Maintain state
  – Log
  – View number
  – State machine state
• Can perform a set of operations
  – Need not be simple reads/writes
  – Must be deterministic
• Well-behaved nodes must:
  – Start in the same state
  – Execute requests in the same order
Views
• Operations occur within views.
• For a given view, a particular node is designated the primary node, and the others are backup nodes.
• Primary = v mod n
  – n: number of nodes
  – v: view number
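The primary-selection rule is just round-robin over the replicas, so every view change rotates leadership to the next node. A one-line sketch (function name is illustrative):

```python
def primary_for_view(v: int, n: int) -> int:
    """Primary replica for view v among n replicas: round-robin, per primary = v mod n."""
    return v % n

assert primary_for_view(0, 4) == 0  # view 0: replica 0 leads
assert primary_for_view(5, 4) == 1  # view 5 of 4 replicas: replica 1 leads
```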
High-level Algorithm
1. A client sends a request to invoke a service operation to the primary
2. The primary multicasts the request to the backups
3. Replicas execute the request and send a reply to the client
4. The client waits for f+1 replies from different replicas with the same result; this is the result of the operation
High-level Algorithm contd.
• If the client does not receive replies soon enough
  – It broadcasts the request to all replicas. If the request has already been processed, the replicas simply re-send the reply.
• If the replica is not the primary,
  – It relays the request to the primary.
• If the primary does not multicast the request to the group
  – It will eventually be suspected to be faulty by enough replicas to cause a view change.
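The client-side acceptance rule (wait for f+1 matching replies, so at least one must come from a correct replica) can be sketched as follows. The reply format `(replica_id, result)` and the function name are illustrative, not from the paper.

```python
from collections import Counter

def accept_result(replies, f):
    """Return the result vouched for by at least f+1 distinct replicas, else None."""
    latest = dict(replies)              # keep one reply per replica id
    counts = Counter(latest.values())   # tally identical results
    for result, votes in counts.items():
        if votes >= f + 1:
            return result
    return None                         # not enough matching replies yet

# With f = 1, two matching replies suffice even if a third replica lies.
replies = [(0, "ok"), (1, "ok"), (2, "bad")]
assert accept_result(replies, f=1) == "ok"
```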
Protocol
• It is a three-phase protocol
  – Pre-prepare: the primary proposes an order
  – Prepare: backups agree on the sequence number
  – Commit: replicas agree to commit
Request to primary: {REQUEST, operation, ts, client}
Pre-prepare message: <{PRE_PREPARE, view, seq#, msg_digest}, msg>
Prepare message: {PREPARE, view, seq#, msg_digest, i}
Prepared: when replica i receives 2f prepares that match the pre-prepare.
Commit message: {COMMIT, view, seq#, msg_digest, i}
Committed-local: when prepared(i) holds and i has received 2f+1 commits.
Reply: {REPLY, view, ts, client, i, response}
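The two quorum predicates behind these phases can be sketched directly from the message flow above. This is a simplified model (counts stand in for actual message sets, and matching on view/sequence/digest is assumed to have happened already):

```python
def prepared(pre_prepare_seen: bool, matching_prepares: int, f: int) -> bool:
    """Replica i is prepared: it has the pre-prepare plus 2f matching prepares."""
    return pre_prepare_seen and matching_prepares >= 2 * f

def committed_local(is_prepared: bool, matching_commits: int, f: int) -> bool:
    """Committed-local: prepared, plus 2f+1 matching commits received."""
    return is_prepared and matching_commits >= 2 * f + 1

# With f = 1 (n = 4): 2 matching prepares make a request prepared,
# and 3 matching commits make it committed-local.
assert prepared(True, 2, f=1)
assert committed_local(True, 3, f=1)
assert not committed_local(False, 3, f=1)  # never commits without preparing
```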
Truncating the log
• Checkpoints at regular intervals.
• Requests are in the log, or already stable.
• Each node maintains multiple copies of state:
  – A copy of the last proven checkpoint
  – 0 or more unproven checkpoints
  – The current working state
• A node sends a checkpoint message when it generates a new checkpoint.
• A checkpoint is proven when a quorum agrees.
  – Then this checkpoint becomes stable.
  – The log is truncated and old checkpoints are discarded.
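The truncation step itself is simple: once a checkpoint at sequence number s is stable, every logged request at or below s is covered by it and can be dropped. A minimal sketch (the log-as-dict representation is an assumption of this example):

```python
def truncate_log(log, stable_seq):
    """Drop entries covered by the stable checkpoint; keep everything after it."""
    return {seq: msg for seq, msg in log.items() if seq > stable_seq}

# A stable checkpoint at sequence 100 lets the node discard requests 98-100.
log = {98: "req-a", 99: "req-b", 100: "req-c", 101: "req-d"}
log = truncate_log(log, stable_seq=100)
assert log == {101: "req-d"}
```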
View change
• The view change mechanism protects against faulty primaries.
• Backups propose a view change when a timer expires.
  – The timer runs whenever a backup has accepted some message and is waiting to execute it.
  – Once a view change is proposed, the backup will no longer do work (except checkpointing) in the current view.
View change 2
• A view change message contains
  – The sequence number of the highest message in the stable checkpoint
    • And the checkpoint messages
  – A pre-prepare message for each non-checkpointed message
    • And proof that it was prepared
• The new primary declares a new view when it receives a quorum of messages.
New view
• The new primary computes
  – The maximum checkpointed sequence number
  – The maximum sequence number not checkpointed
• It constructs new pre-prepare messages
  – Either a new pre-prepare for a message in the new view
  – Or a no-op pre-prepare, so there are no gaps in the sequence
New view 2
• The new primary sends a new-view message
  – Contains all view change messages
  – And all computed pre-prepare messages
• Recipients verify:
  – The pre-prepare messages
  – That they have the latest checkpoint
    • If not, they can fetch a copy
• Each recipient then sends a prepare message for each pre-prepare and enters the new view.
Controlling View Changes
• To avoid moving through views too quickly:
  – Nodes will wait longer if no useful work was done in the previous view
    • I.e. only re-execution of previous requests
    • Or enough nodes accepted the change, but no new view was declared
  – If a node gets f+1 view change requests with a higher view number
    • It sends its own view change with the minimum such view number
    • This is safe, because at least one non-faulty replica sent a message
Optimization
• Don't send f+1 full responses
  – Just send f digests and one full response.
  – If that doesn't work, fall back to the old protocol.
• Tentative commits
  – After prepare, a backup may tentatively execute the request (this fails on view changes).
  – The client waits for a quorum of tentative replies; otherwise it retries and waits for f+1 replies.
• Read-only requests
  – Clients multicast directly to the replicas.
  – Replicas execute the request, wait until no tentative requests are pending, and return the result.
  – The client waits for a quorum of results.
Microbenchmark
• Worst case.
• Not much difference on real workloads.
• Still faster than Rampart and SecureRing.
Benchmark
• Andrew benchmark: a software development workload with 5 phases
  – Create subdirectories
  – Copy source tree
  – Look at file status
  – Look at file contents
  – Compile
• Implementations compared
  – NFS
  – BFS strict
  – BFS (lookup and read are read-only)
Results
Fault-Scalable Byzantine Fault-Tolerant Services
Fault-scalability
• A fault-scalable service is one in which performance degrades gradually, if at all, as more server faults are tolerated.
• BFT is not fault-scalable.
• This is "graceful degradation" in web parlance.
Query/Update (Q/U) Protocol
• The Q/U protocol is Byzantine fault tolerant and fault-scalable.
• Provides the same operations and interfaces as replicated state machine systems.
• Uses 3 techniques:
  – Optimistic non-destructive updates.
  – Minimizes consensus by using versioning with a logical timestamping scheme.
  – Uses preferred quorums to minimize server-to-server communication.
• Cost: needs 5f+1 servers, instead of the 3f+1 servers of agreement-based BFT, to be fault-scalable.
Quorum vs Agreement protocols
• In a quorum-based system, only subsets of the servers process each request.
• In agreement-based systems, all servers process all requests.
Efficiency and Scalability
• No need for prepare or commit phases, because of optimism.
Throughput-scalability
• Additional servers, beyond those necessary to provide the desired fault tolerance, can provide additional throughput.
• No need to partition the service to achieve scalability.
System model
• Clients and servers
  – Asynchronous timing.
  – May be Byzantine faulty.
  – Computationally bounded (crypto can't be broken).
  – Communicate over unreliable point-to-point channels.
  – Know each other's symmetric keys.
• The failure model is a hybrid failure model
  – Benign (follow the protocol)
  – Malevolent (don't follow the protocol)
  – Faulty (malevolent, or crashes and never recovers)
Q/U Overview
• Clients update objects by issuing requests, stamped with object versions, to version servers.
• Version servers evaluate these requests.
  – If the request is over an out-of-date version, the client's version is corrected and the request reissued.
  – If an out-of-date server is required to reach a quorum, it retrieves an object history from a group of other servers.
  – If the version matches the server's version, the request is executed.
• Since the protocol is optimistic, errors must be corrected when objects are concurrently accessed.
Concurrency and Repair
• Concurrent access to an object may fail.
• Two repair operations:
  – Barrier
    • Create a barrier (a dummy request with a larger timestamp) that prevents earlier requests from being executed.
  – Copy
    • Copy the object past the barrier so that it can be acted upon.
• Clients may repeatedly barrier each other; to combat this, an exponential backoff strategy is enforced.
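The backoff used to break barrier livelock can be sketched as a capped exponential delay with jitter; the concrete parameters here are illustrative, not the paper's values.

```python
import random

def backoff_delay(attempt, base=0.01, cap=1.0):
    """Delay (seconds) before retry `attempt` (0-based): capped exponential with
    jitter, so contending clients desynchronize instead of re-colliding."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# Successive attempts draw from exponentially widening windows, up to the cap.
delays = [backoff_delay(a) for a in range(8)]
assert all(0 <= d <= 1.0 for d in delays)
```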
Classification and Constraints
• Based on partial observations of the global system state, an operation may be
  – Established
    • Complete (4b+1 ≤ order)
  – Potential
    • Repairable (2b+1 ≤ order < 4b+1)
      – Can be repaired using the barrier-and-copy strategy
    • Incomplete (otherwise)
      – Must be reverted
• Order is the number of responses received by the client.
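The classification thresholds above translate directly into code. Here b is the number of malevolent servers tolerated; the function name and string labels are illustrative.

```python
def classify(order: int, b: int) -> str:
    """Classify a candidate by `order`, the number of responses observed."""
    if order >= 4 * b + 1:
        return "complete"     # established: safe to treat as executed
    if order >= 2 * b + 1:
        return "repairable"   # potential: can be completed via barrier and copy
    return "incomplete"       # potential: must be reverted

# With b = 1, the thresholds are 5 (complete) and 3 (repairable).
assert classify(5, b=1) == "complete"
assert classify(3, b=1) == "repairable"
assert classify(2, b=1) == "incomplete"
```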
Example
Algorithm for increment.
Optimizations
• Do everything in a single step.
  – Inline repair
    • If an object is not contended, just sync it without a barrier phase.
  – Cached object history set
    • Cache object histories to reduce round trips.
• Preferred quorums
  – Always go to a quorum that has already processed the object.
    • Improves cache locality.
    • Reduces serialization overhead.
Optimizations contd.
• Reducing contention
  – Handling repeated requests
    • Store answers; send the same answers for the same queries.
  – Retry and back-off policy
    • Reduces contention on a contended object.
  – Round-robin strategy for the non-preferred quorum.
Optimizations contd.
• Optimizing data structures
  – Trade bandwidth for computation while syncing objects.
  – Use digital signatures instead of HMACs for authenticators (for large n).
  – Compact timestamps and replica histories.
Multi-Object Updates
• Servers lock their local copies; if they approve the OHS, the update goes through.
• If not, a multi-object repair protocol runs.
  – In this case, repair depends on the ability to establish all objects in the set.
  – Objects in the set are only repairable if all are repairable; otherwise, objects in the set that would be repairable are reclassified as incomplete.
Fault Scalability
More fault-scalability
Isolated vs Contending
NFSv3 metadata
Thank you
Two generals problem
• A city is in a valley.
• The generals who wish to conquer it are on 2 mountains surrounding the city.
• They have to coordinate on a time to attack by sending messages across the valley.
• The channel is unreliable, as messengers may be caught.
Solution
• Message acknowledgements do not work.
• Replication is the only solution.
  – Send 100 messengers instead of one.
Byzantine generals problem
• More general than the two generals problem.
• What if messengers or generals are traitors?
• They may manipulate messages.
• All generals should reach consensus (majority agreement).
Properties of a solution
• A solution has to guarantee that all correct processes eventually reach a decision regarding the value of the order they have been given.
• All correct processes have to decide on the same value of the order they have been given.
• If the source process is a correct process, all processes have to decide on the value that was originally given by the source process.
Impossible in the general case
Simplified Lamport's algorithm
• The general sends his value to each lieutenant.
• Each lieutenant sends the value he receives to the other lieutenants. (Stage 1)
• Each lieutenant waits until he receives the messages from all the lieutenants, then acts on the majority agreement. (Stage 2)
Properties of the Algorithm
• 3 × faulty processes < total processes.
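The two-stage decision rule above can be sketched as a majority vote; the function name and value strings are illustrative.

```python
from collections import Counter

def lieutenant_decision(own_value, relayed_values):
    """Stage 2: decide by majority over the value received from the general
    plus the values relayed by the other lieutenants in stage 1."""
    counts = Counter([own_value] + relayed_values)
    return counts.most_common(1)[0][0]

# 4 processes, 1 traitorous lieutenant relaying a lie: the majority still holds,
# consistent with 3 * faulty < total.
assert lieutenant_decision("attack", ["attack", "retreat"]) == "attack"
```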
Overview
• Queries are read-only methods.
• Updates modify an object.
• Exported methods take arguments and return answers.
• Clients perform operations by issuing requests to a quorum.
• When a server receives a request and accepts it, it invokes a method.
• Each update creates a new object version.
Overview
• The object version is kept with its logical timestamp in a version history called the replica history.
• Servers return replica histories in response to requests.
• Clients store replica histories in their object history set, an array of replica histories indexed by server.
Overview
• Timestamps in these histories are candidates for future operations.
• Candidates are classified in order to determine which object version a method should be executed upon.
Overview
• In non-optimistic operation, a client may need to perform a repair (addressed later).
• To perform an operation, a client first retrieves an object history set. The client's operation is conditioned on this set, which is transmitted with the operation.
Overview
• The client sends this operation to a quorum of servers.
• To promote efficiency, the client sends the request to a preferred quorum (addressed later).
• Single-phase operation hinges on the availability of a preferred quorum, and on concurrency-free access.
Overview
• Before executing a request, servers first validate its integrity.
• This is important: servers do not communicate object histories directly to each other, so the client's data must be validated.
• Servers use authenticators to do this: lists of HMACs that prevent malevolent nodes from fabricating replica histories.
• Servers cull replica histories that they cannot validate from the conditioned-on OHS.
Overview – the last bit
• Servers validate that they do not have a higher timestamp in their local replica histories.
• Failing this, the client repairs.
• Passing this, the method is executed and the new timestamp created.
  – Timestamps are crafted such that they always increase in value.
Preferred Quorums
• Traditional quorum systems use random quorums to distribute load, but this means that servers frequently need to be synced.
• Preferred quorums choose to access the servers with the most up-to-date data, assuring that syncs happen less often.
Preferred Quorums
• If a preferred quorum cannot be met, clients probe for additional servers to add to the quorum.
  – Authenticators make it impossible to forge object histories for benign servers.
  – The new host syncs with b+1 host servers in order to validate that the data is correct.
• In the prototype, probing selects servers such that the load is distributed, using a method parameterized on object ID and server ID.
Concurrency and Repair
• Concurrent access to an object may fail.
• Two operations:
  – Barrier
    • Barrier candidates have no data associated with them, and so are safe to select during periods of contention.
    • A barrier advances the logical clock so as to prevent earlier timestamps from completing.
  – Copy
    • Copies the latest object data past the barrier, so it can be acted upon.
Concurrency and Repair
• Clients may repeatedly barrier each other; to combat this, an exponential backoff strategy is enforced.