Dynamic Sampling

Input: a graphical model with Gibbs distribution µ; a sample X ~ µ; and an update (D, Φ_D)
Output: X′ ~ µ′, where µ′ is the new Gibbs distribution

Motivation:
• inference/learning tasks where the graphical model is changing dynamically
  • video de-noising
  • online learning with dynamic or streaming data
• sampling/inference/learning algorithms which adaptively and locally change the joint distribution
  • stochastic gradient descent
  • JSV algorithm for perfect matching
Dynamic Sampling

Input: a graphical model with Gibbs distribution µ; a sample X ~ µ; and an update (D, Φ_D)
Output: X′ ~ µ′, where µ′ is the new Gibbs distribution

Goal: transform a sample X ~ µ into X′ ~ µ′ by local changes.

Current sampling techniques are not powerful enough:
• µ may be changed significantly by dynamic updates;
• Monte Carlo sampling does not know when to stop;
• notions such as mixing time give only worst-case estimates.
Graphical Model

Instance of a graphical model: I = (V, E, [q], Φ)
• V: variables
• E ⊆ 2^V: constraints
• [q] = {0, 1, …, q−1}: domain
• Φ = (φ_v)_{v∈V} ∪ (φ_e)_{e∈E}: factors

Gibbs distribution µ over all σ ∈ [q]^V:

  µ(σ) ∝ ∏_{v∈V} φ_v(σ_v) · ∏_{e∈E} φ_e(σ_e)
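The Gibbs distribution can be made concrete by brute-force enumeration on a tiny instance. The sketch below (the two-variable model and its factors are illustrative assumptions, not from the slides) forbids the assignment (1, 1) via an edge factor and normalizes the remaining weights:

```python
# Brute-force Gibbs distribution of a tiny graphical model (illustrative sketch).
from itertools import product

V = [0, 1]        # variables
q = 2             # domain [q] = {0, 1}
E = [(0, 1)]      # a single constraint on variables 0 and 1

phi_v = {v: (lambda s: 1.0) for v in V}                   # uniform vertex factors
phi_e = {(0, 1): lambda s: 0.0 if s == (1, 1) else 1.0}   # forbid assignment (1, 1)

def weight(sigma):
    """Unnormalized Gibbs weight: prod_v phi_v(sigma_v) * prod_e phi_e(sigma_e)."""
    w = 1.0
    for v in V:
        w *= phi_v[v](sigma[v])
    for e in E:
        w *= phi_e[e](tuple(sigma[v] for v in e))
    return w

Z = sum(weight(sigma) for sigma in product(range(q), repeat=len(V)))
mu = {sigma: weight(sigma) / Z
      for sigma in product(range(q), repeat=len(V))}
# mu puts probability 1/3 on each of (0,0), (0,1), (1,0), and 0 on (1,1)
```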
Notations

Instance of a graphical model: I = (V, E, [q], Φ)

For D ⊆ V ∪ 2^V:
• vbl(D) ≜ (V ∩ D) ∪ (⋃_{e∈D∩E} e)   (involved variables)

For R ⊆ V:
• E(R) ≜ {e ∈ E | e ⊆ R}             (internal constraints)
• δ(R) ≜ {e ∈ E∖E(R) | e ∩ R ≠ ∅}    (boundary constraints)
• E⁺(R) ≜ {e ∈ E | e ∩ R ≠ ∅}        (incident constraints)
        = E(R) ∪ δ(R)
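These set definitions transcribe directly into code. In the sketch below (the helper names `vbl`, `internal`, `boundary`, `incident` are my own; constraints are represented as tuples of variables), note that E⁺(R) = E(R) ∪ δ(R) holds by construction:

```python
def vbl(D, V, E):
    """Involved variables of an update set D ⊆ V ∪ 2^V:
    (V ∩ D) together with all variables of constraints in D ∩ E."""
    involved = {x for x in D if x in set(V)}
    for e in D:
        if e in E:              # e is a constraint in D ∩ E
            involved |= set(e)
    return involved

def internal(R, E):
    """E(R): constraints lying entirely inside R."""
    return {e for e in E if set(e) <= set(R)}

def boundary(R, E):
    """δ(R): constraints that touch R but are not contained in it."""
    return {e for e in E if set(e) & set(R) and not set(e) <= set(R)}

def incident(R, E):
    """E+(R): all constraints touching R; equals internal(R) ∪ boundary(R)."""
    return {e for e in E if set(e) & set(R)}
```

For example, on a path 1–2–3–4 with R = {1, 2}, the internal constraints are {(1, 2)}, the boundary constraints are {(2, 3)}, and the incident constraints are their union.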
Dynamic Sampler

Input: a graphical model with Gibbs distribution µ; a sample X ~ µ; and an update (D, Φ_D)
Output: X′ ~ µ′, where µ′ is the new Gibbs distribution

Upon receiving an update (D, Φ_D):
• apply the changes (D, Φ_D) to the current graphical model;
• R ← vbl(D) ≜ (V ∩ D) ∪ (⋃_{e∈D∩E} e);
• while R ≠ ∅:
  • (X, R) ← Resample(X, R).
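The control flow of this update loop can be sketched as follows. The `Model` container, the `vbl` helper, and `toy_resample` are illustrative assumptions: in particular, `toy_resample` merely redraws every variable in R once and returns an empty set, standing in for the actual Resample subroutine, which resamples according to the factors and only shrinks R gradually:

```python
import random

class Model:
    """Minimal mutable graphical-model container (illustrative, not from the slides)."""
    def __init__(self, V, E, q):
        self.V, self.E, self.q = set(V), set(E), q
        self.phi = {}                     # factor table, updated in place

    def apply_update(self, D, phi_D):
        self.phi.update(phi_D)            # install the new factors on D

def vbl(model, D):
    """R <- vbl(D): variables in D plus all variables of constraints in D."""
    R = {x for x in D if x in model.V}
    for e in D:
        if e in model.E:
            R |= set(e)
    return R

def dynamic_sampler(model, X, D, phi_D, resample):
    model.apply_update(D, phi_D)          # apply changes (D, Phi_D)
    R = vbl(model, D)
    while R:                              # while R != empty set
        X, R = resample(X, R)             # (X, R) <- Resample(X, R)
    return X

def toy_resample(X, R):
    """Toy stand-in for Resample: redraw each variable in R uniformly, then stop."""
    X = dict(X)
    for v in R:
        X[v] = random.randrange(2)
    return X, set()
```

Passing `resample` as a parameter keeps the loop faithful to the slide while leaving the unspecified subroutine pluggable.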