Residual networks
◼ For a flow 𝑓, the residual capacity 𝑐𝑓 is:
❑ 𝑐𝑓(𝑢, 𝑣) = 𝑐(𝑢, 𝑣) − 𝑓(𝑢, 𝑣)
❑ 𝑐𝑓(𝑣, 𝑢) = 𝑐(𝑣, 𝑢) + 𝑓(𝑢, 𝑣)
◼ Residual network:
❑ 𝐺𝑓 = (𝑉, 𝐸𝑓), where
❑ 𝐸𝑓 = {(𝑢, 𝑣) ∈ 𝑉 × 𝑉 : 𝑐𝑓(𝑢, 𝑣) > 0}.
◼ Now an augmenting path is just a path from 𝑠 to 𝑡 in the residual network.
◼ Intuition behind the two formulas:
❑ We can still inject up to 𝑐(𝑢, 𝑣) − 𝑓(𝑢, 𝑣) flow from 𝑢 to 𝑣.
❑ We can withdraw the 𝑓(𝑢, 𝑣) flow from 𝑢 to 𝑣 first, and then inject up to 𝑐(𝑣, 𝑢) flow from 𝑣 to 𝑢.
[Figure: an edge (𝑢, 𝑣) with flow 𝑓(𝑢, 𝑣), capacity 𝑐(𝑢, 𝑣), and reverse capacity 𝑐(𝑣, 𝑢), shown next to its two residual edges with capacities 𝑐(𝑢, 𝑣) − 𝑓(𝑢, 𝑣) and 𝑐(𝑣, 𝑢) + 𝑓(𝑢, 𝑣)]
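The two formulas above can be sketched as a tiny helper. This is an illustrative snippet, not part of the slides; the function name is hypothetical, and it assumes the invariant maintained by the algorithm later in the deck, namely that at most one of 𝑓(𝑢, 𝑣), 𝑓(𝑣, 𝑢) is positive.

```python
def residual_capacities(c_uv, c_vu, f_uv):
    """Given capacities c(u,v), c(v,u) and flow f(u,v) (with f(v,u) = 0),
    return the pair (c_f(u,v), c_f(v,u)) as defined on the slide."""
    cf_uv = c_uv - f_uv   # room left to inject from u to v
    cf_vu = c_vu + f_uv   # withdraw f(u,v) first, then inject up to c(v,u)
    return cf_uv, cf_vu
```

For example, with 𝑐(𝑢, 𝑣) = 10, 𝑐(𝑣, 𝑢) = 4 and 𝑓(𝑢, 𝑣) = 3, both residual capacities come out to 7.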
Residual networks
◼ The residual network tells us how we can push more flow from 𝑠 to 𝑡 in the graph.
❑ And how much.
◼ So we define an augmenting path to be a path from 𝑠 to 𝑡 in the residual network.
◼ Now finding an augmenting path amounts to finding a path from 𝑠 to 𝑡 in the residual network, which we already know how to do
❑ e.g. with BFS.
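The path search above can be sketched with a plain BFS. The representation is an assumption for illustration: the residual network 𝐺𝑓 as a dict mapping each vertex 𝑢 to a dict of neighbors 𝑣 with their residual capacities 𝑐𝑓(𝑢, 𝑣).

```python
from collections import deque

def find_augmenting_path(Gf, s, t):
    """BFS in the residual network. Returns a list of vertices from s to t,
    or None if t is unreachable (i.e. no augmenting path exists)."""
    parent = {s: None}          # doubles as the visited set
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:              # reconstruct the path by walking parents back
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v, cf in Gf.get(u, {}).items():
            if cf > 0 and v not in parent:   # only edges with c_f(u,v) > 0
                parent[v] = u
                q.append(v)
    return None
```

Because BFS explores level by level, this also returns a shortest augmenting path, which is exactly the choice made by the Edmonds–Karp variant.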
FORD-FULKERSON(𝐺, 𝑠, 𝑡)
◼ for each edge (𝑢, 𝑣) ∈ 𝐸 // Initialization
❑ 𝑓(𝑢, 𝑣) ← 0, 𝑓(𝑣, 𝑢) ← 0
◼ 𝐺𝑓 = 𝐺
◼ while there exists a path 𝑝 from 𝑠 to 𝑡 in the residual network 𝐺𝑓
❑ 𝑐𝑓(𝑝) ← min{𝑐𝑓(𝑢, 𝑣) : (𝑢, 𝑣) is in 𝑝} // max we can inject along 𝑝
❑ for each edge (𝑢, 𝑣) in 𝑝 // update flow
◼ if 𝑓(𝑣, 𝑢) = 0 // no backward flow on this edge
❑ 𝑓(𝑢, 𝑣) ← 𝑓(𝑢, 𝑣) + 𝑐𝑓(𝑝)
◼ else if 𝑐𝑓(𝑝) ≤ 𝑓(𝑣, 𝑢) // has backward flow, withdraw part of it
❑ 𝑓(𝑣, 𝑢) ← 𝑓(𝑣, 𝑢) − 𝑐𝑓(𝑝)
◼ else // has backward flow, withdraw all of it, add forward flow
❑ 𝑓(𝑢, 𝑣) ← 𝑐𝑓(𝑝) − 𝑓(𝑣, 𝑢)
❑ 𝑓(𝑣, 𝑢) ← 0
❑ Update the residual network 𝐺𝑓.
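The pseudocode above can be turned into a short runnable sketch. The representation is an assumption for illustration (capacities as a dict 𝑢 → {𝑣: 𝑐(𝑢, 𝑣)}, flows stored sparsely); the three-way update in the inner loop mirrors the no-backward-flow / withdraw-part / withdraw-all cases on the slide.

```python
from collections import deque

def ford_fulkerson(cap, s, t):
    """cap: dict u -> {v: c(u, v)}. Returns (max flow value, flow dict)."""
    f = {}                                  # f(u, v) <- 0 for every edge
    adj = {}                                # residual edges go both ways
    for u in cap:
        for v in cap[u]:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)

    def c(u, v):
        return cap.get(u, {}).get(v, 0)

    def cf(u, v):
        # Matches both slide formulas, since at most one of
        # f(u,v), f(v,u) is ever positive:
        # c_f(u,v) = c(u,v) - f(u,v)  and  c_f(v,u) = c(v,u) + f(u,v)
        return c(u, v) - f.get((u, v), 0) + f.get((v, u), 0)

    def augmenting_path():                  # BFS in the residual network
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            if u == t:
                p, x = [], t
                while parent[x] is not None:
                    p.append((parent[x], x))
                    x = parent[x]
                return p[::-1]
            for v in adj.get(u, ()):
                if v not in parent and cf(u, v) > 0:
                    parent[v] = u
                    q.append(v)
        return None

    while (p := augmenting_path()) is not None:
        bottleneck = min(cf(u, v) for (u, v) in p)      # c_f(p)
        for (u, v) in p:
            back = f.get((v, u), 0)
            if back == 0:                   # no backward flow: inject
                f[(u, v)] = f.get((u, v), 0) + bottleneck
            elif bottleneck <= back:        # withdraw part of it
                f[(v, u)] = back - bottleneck
            else:                           # withdraw all, add forward flow
                f[(u, v)] = bottleneck - back
                f[(v, u)] = 0

    value = sum(f.get((s, v), 0) - f.get((v, s), 0) for v in adj.get(s, ()))
    return value, f
```

Using BFS here makes this the Edmonds–Karp variant, which guarantees 𝑂(𝑉𝐸²) time; generic Ford–Fulkerson only promises termination for integral capacities.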