
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers (Lecture 14, stanford.edu). The algorithm was first proposed by Alfonso Shimbel in 1955, but is instead named after Richard Bellman and Lester Ford Jr., who published it in 1958 and 1956, respectively. Edward F. Moore also published a variation of the algorithm in 1959, and for this reason it is also sometimes called the Bellman–Ford–Moore algorithm.

Negative edge weights are found in various applications of graphs, which is why this algorithm is useful. If a graph contains a "negative cycle" (i.e. a cycle whose edges sum to a negative value) that is reachable from the source, then there is no ''cheapest'' path: any path that has a point on the negative cycle can be made cheaper by one more walk around the negative cycle. In such a case, the Bellman–Ford algorithm can detect and report the negative cycle.


Algorithm

Like Dijkstra's algorithm, Bellman–Ford proceeds by relaxation, in which approximations to the correct distance are replaced by better ones until they eventually reach the solution. In both algorithms, the approximate distance to each vertex is always an overestimate of the true distance, and is replaced by the minimum of its old value and the length of a newly found path. However, Dijkstra's algorithm uses a priority queue to greedily select the closest vertex that has not yet been processed, and performs this relaxation process on all of its outgoing edges; by contrast, the Bellman–Ford algorithm simply relaxes ''all'' the edges, and does this |V|−1 times, where |V| is the number of vertices in the graph. In each of these repetitions, the number of vertices with correctly calculated distances grows, from which it follows that eventually all vertices will have their correct distances. This method allows the Bellman–Ford algorithm to be applied to a wider class of inputs than Dijkstra's algorithm. The intermediate answers depend on the order of edges relaxed, but the final answer remains the same. Bellman–Ford runs in O(|V|·|E|) time, where |V| and |E| are the number of vertices and edges respectively.

 function BellmanFord(''list'' vertices, ''list'' edges, ''vertex'' source) is
     ''// This implementation takes in a graph, represented as''
     ''// lists of vertices (represented as integers 0..n−1) and''
     ''// edges, and fills two arrays (distance and predecessor)''
     ''// holding the shortest path from the source to each vertex''
     distance := ''list'' of size ''n''
     predecessor := ''list'' of size ''n''

     ''// Step 1: initialize graph''
     for each vertex v in vertices do
         distance[v] := inf          ''// Initialize the distance to all vertices to infinity''
         predecessor[v] := null      ''// And having a null predecessor''
     distance[source] := 0           ''// The distance from the source to itself is zero''

     ''// Step 2: relax edges repeatedly''
     repeat |V|−1 times:
         for each edge (u, v) with weight w in edges do
             if distance[u] + w < distance[v] then
                 distance[v] := distance[u] + w
                 predecessor[v] := u

     ''// Step 3: check for negative-weight cycles''
     for each edge (u, v) with weight w in edges do
         if distance[u] + w < distance[v] then
             predecessor[v] := u
             ''// A negative cycle exists; find a vertex on the cycle''
             visited := ''list'' of size ''n'' initialized with false
             visited[v] := true
             while ''not'' visited[u] do
                 visited[u] := true
                 u := predecessor[u]
             ''// u is a vertex in a negative cycle, find the cycle itself''
             ncycle := [u]
             v := predecessor[u]
             while v != u do
                 ncycle := concatenate([v], ncycle)
                 v := predecessor[v]
             error "Graph contains a negative-weight cycle", ncycle

     return distance, predecessor

Simply put, the algorithm initializes the distance to the source to 0 and all other nodes to infinity. Then for all edges, if the distance to the destination can be shortened by taking the edge, the distance is updated to the new lower value. The core of the algorithm is a loop that scans across all edges on every iteration. For every i ≤ |V|−1, at the end of the i-th iteration, from any vertex v, following the predecessor trail recorded in predecessor yields a path from source to v with a total weight of at most distance[v], and further, distance[v] is a lower bound on the length of any path from source to v that uses at most i edges. Since the longest possible path without a cycle has at most |V|−1 edges, the edges must be scanned |V|−1 times to ensure the shortest path has been found for all nodes. A final scan of all the edges is performed, and if any distance is updated, then a path of length |V| edges has been found, which can only occur if at least one negative cycle exists in the graph. The edge (u, v) that is found in step 3 must be reachable from a negative cycle, but it isn't necessarily part of the cycle itself, which is why it's necessary to follow the path of predecessors backwards until a cycle is detected.
The above pseudo-code uses a Boolean array (visited) to find a vertex on the cycle, but any cycle-finding algorithm can be used to find a vertex on the cycle. A common improvement when implementing the algorithm is to return early when an iteration of step 2 fails to relax any edges, which implies all shortest paths have been found, and therefore there are no negative cycles. In that case, the complexity of the algorithm is reduced from O(|V|·|E|) to O(l·|E|) where l is the maximum length of a shortest path in the graph.
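As a concreteness check, the pseudocode above translates almost line-for-line into a short program. The following is a minimal Python sketch (the function name and the edge-list representation are choices for illustration; it raises an exception rather than extracting the cycle, and it includes the early-exit check described above):

```python
from math import inf

def bellman_ford(num_vertices, edges, source):
    """Single-source shortest paths; edges is a list of (u, v, w) triples."""
    # Step 1: initialize distances to infinity and predecessors to None
    distance = [inf] * num_vertices
    predecessor = [None] * num_vertices
    distance[source] = 0

    # Step 2: relax every edge up to |V|-1 times
    for _ in range(num_vertices - 1):
        updated = False
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u
                updated = True
        if not updated:
            break  # early exit: no edge relaxed, all shortest paths settled

    # Step 3: one more pass; any improvement implies a reachable negative cycle
    for u, v, w in edges:
        if distance[u] + w < distance[v]:
            raise ValueError("Graph contains a negative-weight cycle")

    return distance, predecessor
```

Because `inf + w` is still `inf` in Python, edges out of unreached vertices never relax anything, matching the pseudocode's behaviour.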


Proof of correctness

The correctness of the algorithm can be shown by induction:

Lemma. After ''i'' repetitions of the ''for'' loop,
* if Distance(''u'') is not infinity, it is equal to the length of some path from ''s'' to ''u''; and
* if there is a path from ''s'' to ''u'' with at most ''i'' edges, then Distance(''u'') is at most the length of the shortest path from ''s'' to ''u'' with at most ''i'' edges.

Proof. For the base case of induction, consider i = 0 and the moment before the ''for'' loop is executed for the first time. Then, for the source vertex, source.distance = 0, which is correct. For other vertices ''u'', u.distance = infinity, which is also correct because there is no path from ''source'' to ''u'' with 0 edges.

For the inductive case, we first prove the first part. Consider a moment when a vertex's distance is updated by v.distance := u.distance + uv.weight. By inductive assumption, u.distance is the length of some path from ''source'' to ''u''. Then u.distance + uv.weight is the length of the path from ''source'' to ''v'' that follows the path from ''source'' to ''u'' and then goes to ''v''.

For the second part, consider a shortest path ''P'' (there may be more than one) from ''source'' to ''v'' with at most ''i'' edges. Let ''u'' be the last vertex before ''v'' on this path. Then, the part of the path from ''source'' to ''u'' is a shortest path from ''source'' to ''u'' with at most ''i''−1 edges, since if it were not, then there must be some strictly shorter path from ''source'' to ''u'' with at most ''i''−1 edges, and we could then append the edge ''uv'' to this path to obtain a path with at most ''i'' edges that is strictly shorter than ''P'', a contradiction. By inductive assumption, u.distance after ''i''−1 iterations is at most the length of this path from ''source'' to ''u''. Therefore, uv.weight + u.distance is at most the length of ''P''. In the ''i''-th iteration, v.distance gets compared with uv.weight + u.distance, and is set equal to it if uv.weight + u.distance is smaller. Therefore, after ''i'' iterations, v.distance is at most the length of ''P'', i.e., the length of the shortest path from ''source'' to ''v'' that uses at most ''i'' edges.

If there are no negative-weight cycles, then every shortest path visits each vertex at most once, so at step 3 no further improvements can be made. Conversely, suppose no improvement can be made. Then for any cycle with vertices v[0], ..., v[k−1],

 v[i].distance ≤ v[i−1 (mod k)].distance + v[i−1 (mod k)]v[i].weight

Summing around the cycle, the v[i].distance and v[i−1 (mod k)].distance terms cancel, leaving

 0 ≤ sum from 1 to k of v[i−1 (mod k)]v[i].weight

I.e., every cycle has nonnegative weight.


Finding negative cycles

When the algorithm is used to find shortest paths, the existence of negative cycles is a problem, preventing the algorithm from finding a correct answer. However, since it terminates upon finding a negative cycle, the Bellman–Ford algorithm can be used for applications in which this is the target to be sought – for example in cycle-cancelling techniques in network flow analysis.
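When the cycle itself is the object of interest, the predecessor-following idea from step 3 can be packaged on its own. The following is a minimal Python sketch, assuming the common trick of starting every distance at 0 (equivalent to adding a virtual source with zero-weight edges to all vertices, so any negative cycle is found, not just those reachable from one source); if the |V|-th pass still relaxes an edge, then |V| predecessor steps from the updated vertex are guaranteed to land inside a cycle:

```python
def find_negative_cycle(num_vertices, edges):
    """Return a list of vertices forming a negative cycle, or None."""
    # Virtual-source trick: all distances start at 0
    distance = [0] * num_vertices
    predecessor = [None] * num_vertices

    x = None
    for _ in range(num_vertices):  # |V| passes, one more than usual
        x = None
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u
                x = v  # remember the last vertex relaxed in this pass
        if x is None:
            return None  # a full pass with no update: no negative cycle

    # x was relaxed on the |V|-th pass, so it is reachable from a cycle;
    # |V| predecessor steps are enough to end up strictly inside it
    for _ in range(num_vertices):
        x = predecessor[x]

    # Walk predecessors once more to collect the cycle itself
    cycle, v = [x], predecessor[x]
    while v != x:
        cycle.append(v)
        v = predecessor[v]
    cycle.reverse()
    return cycle
```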


Applications in routing

A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing protocols, for example the Routing Information Protocol (RIP). The algorithm is distributed because it involves a number of nodes (routers) within an autonomous system (AS), a collection of IP networks typically owned by an ISP. It consists of the following steps:

# Each node calculates the distances between itself and all other nodes within the AS and stores this information as a table.
# Each node sends its table to all neighboring nodes.
# When a node receives distance tables from its neighbors, it calculates the shortest routes to all other nodes and updates its own table to reflect any changes.

The main disadvantages of the Bellman–Ford algorithm in this setting are as follows:
* It does not scale well.
* Changes in network topology are not reflected quickly since updates are spread node-by-node.
* Count to infinity: if link or node failures render a node unreachable from some set of other nodes, those nodes may spend forever gradually increasing their estimates of the distance to it, and in the meantime there may be routing loops.
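The exchange-and-update loop in the steps above can be sketched as a single synchronous round of table exchange. Real protocols such as RIP are asynchronous and add mechanisms like split horizon, so this is only an idealized model; the function name and the dict-of-dicts table representation are choices for illustration:

```python
from math import inf

def distance_vector_round(nodes, link_cost, tables):
    """One synchronous round: every node x recomputes its table using the
    Bellman equation  d_x(y) = min over neighbours v of cost(x, v) + d_v(y)."""
    new_tables = {}
    for x in nodes:
        new_tables[x] = {}
        for y in nodes:
            best = 0 if x == y else inf  # distance to self is always zero
            for v, c in link_cost.get(x, {}).items():
                # route via neighbour v, using v's table from the last round
                best = min(best, c + tables[v][y])
            new_tables[x][y] = best
    return new_tables
```

For example, on the three-node line 0–1–2 with unit link costs, tables are initialized with direct costs (infinity for non-neighbors) and rounds are repeated until nothing changes; after one round, node 0 learns a cost-2 route to node 2 via node 1.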


Improvements

The Bellman–Ford algorithm may be improved in practice (although not in the worst case) by the observation that, if an iteration of the main loop of the algorithm terminates without making any changes, the algorithm can be immediately terminated, as subsequent iterations will not make any more changes. With this early termination condition, the main loop may in some cases use many fewer than |V|−1 iterations, even though the worst case of the algorithm remains unchanged. The following improvements all maintain the O(|V|·|E|) worst-case time complexity.

A variation of the Bellman–Ford algorithm described by Yen (1970) reduces the number of relaxation steps that need to be performed within each iteration of the algorithm. If a vertex ''v'' has a distance value that has not changed since the last time the edges out of ''v'' were relaxed, then there is no need to relax the edges out of ''v'' a second time. In this way, as the number of vertices with correct distance values grows, the number whose outgoing edges need to be relaxed in each iteration shrinks, leading to a constant-factor savings in time for dense graphs. This variation can be implemented by keeping a collection of vertices whose outgoing edges need to be relaxed, removing a vertex from this collection when its edges are relaxed, and adding to the collection any vertex whose distance value is changed by a relaxation step. In China, this algorithm was popularized by Fanding Duan, who rediscovered it in 1994, as the "shortest path faster algorithm".

Yen also described another improvement to the Bellman–Ford algorithm. His improvement first assigns some arbitrary linear order on all vertices and then partitions the set of all edges into two subsets. The first subset, ''Ef'', contains all edges (''vi'', ''vj'') such that ''i'' < ''j''; the second, ''Eb'', contains edges (''vi'', ''vj'') such that ''i'' > ''j''. Each vertex is visited in the order ''v1'', ''v2'', ..., ''v|V|'', relaxing each outgoing edge from that vertex in ''Ef''. Each vertex is then visited in the order ''v|V|'', ''v|V|−1'', ..., ''v1'', relaxing each outgoing edge from that vertex in ''Eb''. Each iteration of the main loop of the algorithm, after the first one, adds at least two edges to the set of edges whose relaxed distances match the correct shortest path distances: one from ''Ef'' and one from ''Eb''. This modification reduces the worst-case number of iterations of the main loop of the algorithm from |V|−1 to |V|/2 (Cormen et al., 4th ed., Problem 22-1, p. 640).

Another improvement, by Bannister and Eppstein (2012), replaces the arbitrary linear order of the vertices used in Yen's second improvement by a random permutation. This change makes the worst case for Yen's improvement (in which the edges of a shortest path strictly alternate between the two subsets ''Ef'' and ''Eb'') very unlikely to happen. With a randomly permuted vertex ordering, the expected number of iterations needed in the main loop is at most |V|/3 (see Sedgewick's web exercises for ''Algorithms'', 4th ed., exercises 5 and 12, retrieved 2013-01-30).

In 2023, Jeremy Fineman, at Georgetown University, created an improved algorithm that with high probability runs in Õ(|V|^(8/9)·|E|) time. Here, Õ is a variant of big O notation that hides logarithmic factors.
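The collection-of-vertices variation popularized as the "shortest path faster algorithm" is commonly implemented with a FIFO queue, as in the following Python sketch. It assumes the graph has no negative cycle reachable from the source; without that assumption the loop would not terminate, and a per-vertex relaxation counter would be needed to detect the cycle:

```python
from collections import deque
from math import inf

def spfa(num_vertices, adj, source):
    """Queue-based Bellman-Ford variant: only re-relax the outgoing edges of
    vertices whose distance changed since they were last processed.
    adj[u] is a list of (v, w) pairs for edges u -> v with weight w."""
    distance = [inf] * num_vertices
    distance[source] = 0
    queue = deque([source])
    in_queue = [False] * num_vertices  # avoid duplicate queue entries
    in_queue[source] = True

    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return distance
```

The queue plays the role of the "collection of vertices whose outgoing edges need to be relaxed" described above; the worst case is still O(|V|·|E|).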




References




Secondary sources

* Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Fourth Edition. MIT Press, 2022. Section 22.1: The Bellman–Ford algorithm, pp. 612–616. Problem 22-1, p. 640.