
## A STOCHASTIC FORWARD-BACKWARD SPLITTING METHOD FOR SOLVING MONOTONE INCLUSIONS IN HILBERT SPACES

Citations: 1 (0 self)

### Citations

1312 | Nonlinear functional analysis and its applications, part 2 B: nonlinear monotone operators - Zeidler, Boron - 1990
Citation Context: ...wide applications in pure and applied sciences, and because they provide a convenient framework for a unified treatment of equilibrium problems, variational inequalities, and convex optimization, see [5, 54] and references therein. Let H be a real Hilbert space and let T : H → 2^H be a set-valued maximal monotone operator [5]. In this context, a key problem is to find an element w ∈ H such that 0 ∈ T(w)...

1043 | A fast iterative shrinkage-thresholding algorithm for linear inverse problems - Beck, Teboulle - 2009
Citation Context: ...Since the seminal works [34, 43], forward-backward splitting methods have been considerably developed to be more flexible, to achieve better convergence properties, and to allow for numerical errors [5, 17, 42, 6, 51, 16]. In the important situation where only stochastic estimates of the operators A or B are available, convergence of the sequence in (1.2) has not been proved in the general case. To the best of our kno...

952 | A stochastic approximation method - Robbins, Monro - 1951
Citation Context: ...o special cases of convex optimization and variational inequalities, that will be studied in Section 5. The field of stochastic approximation theory began with the seminal papers of Robbins and Monro [45] and Kiefer and Wolfowitz [26]. After those papers, stochastic approximation algorithms were widely used in stochastic optimization; see e.g. [8, 21, 23, 29] and references therein. An improvement of ...
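
The Robbins-Monro iteration referenced in this context can be sketched in a few lines; the root-finding target, noise model, and step-size constants below are illustrative assumptions, not taken from the paper:

```python
import random

def robbins_monro(noisy_h, x0, n_iters=20000, c=1.0, theta=0.75):
    """Seek a root of h(x) = 0 from noisy evaluations noisy_h(x),
    with step-sizes gamma_n = c * n**(-theta), theta in ]1/2, 1]."""
    x = x0
    for n in range(1, n_iters + 1):
        x -= c * n ** (-theta) * noisy_h(x)
    return x

random.seed(0)
# Toy target (an assumption for illustration): h(x) = 2*(x - 3),
# observed through additive Gaussian noise.
root = robbins_monro(lambda x: 2.0 * (x - 3.0) + random.gauss(0.0, 1.0), x0=0.0)
```

With these step-sizes the deterministic part of the update contracts toward the root while the noise is averaged out, so `root` settles near the true root 3.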

541 | Pegasos: primal estimated sub-gradient solver for SVM - Shalev-Shwartz, Singer, et al. - 2007

528 | Adaptive algorithms and stochastic approximations, volume 22 - Benveniste, Métivier, et al. - 1990
Citation Context: ...ry began with the seminal papers of Robbins and Monro [45] and Kiefer and Wolfowitz [26]. After those papers, stochastic approximation algorithms were widely used in stochastic optimization; see e.g. [8, 21, 23, 29] and references therein. An improvement of the original stochastic approximation method is proposed in [41] and [44]. Their method relies on averaging of the trajectories and allows for larger ste...

500 | Signal recovery by proximal forward-backward splitting, Multiscale Modeling and Simulation - Combettes, Wajs - 2005
Citation Context: ...Since the seminal works [34, 43], forward-backward splitting methods have been considerably developed to be more flexible, to achieve better convergence properties, and to allow for numerical errors [5, 17, 42, 6, 51, 16]. In the important situation where only stochastic estimates of the operators A or B are available, convergence of the sequence in (1.2) has not been proved in the general case. To the best of our kno...

407 | Stochastic Approximation and Recursive Algorithms and Applications - Kushner, Yin - 2003
Citation Context: ...ry began with the seminal papers of Robbins and Monro [45] and Kiefer and Wolfowitz [26]. After those papers, stochastic approximation algorithms were widely used in stochastic optimization; see e.g. [8, 21, 23, 29] and references therein. An improvement of the original stochastic approximation method is proposed in [41] and [44]. Their method relies on averaging of the trajectories and allows for larger ste...

392 | Gradient methods for minimizing composite objective function - Nesterov - 2007
Citation Context: ...Since the seminal works [34, 43], forward-backward splitting methods have been considerably developed to be more flexible, to achieve better convergence properties, and to allow for numerical errors [5, 17, 42, 6, 51, 16]. In the important situation where only stochastic estimates of the operators A or B are available, convergence of the sequence in (1.2) has not been proved in the general case. To the best of our kno...

364 | Stochastic approximation methods for constrained and unconstrained systems - Kushner, Clark - 1978
Citation Context: ...p convergence. It is worth noting that the FOBOS algorithm can also be applied when F is not differentiable. In the optimization setting, the study of almost sure convergence has a long history; see e.g. [46, 28, 8, 13] and references therein. Recent results on almost sure convergence of the projected stochastic gradient algorithm can be found in [7, 38], under rather technical assumptions. Our results are generalizatio...

334 | Problem complexity and method efficiency in optimization - Nemirovski, Yudin - 1983
Citation Context: ...c approximation algorithms were widely used in stochastic optimization; see e.g. [8, 21, 23, 29] and references therein. An improvement of the original stochastic approximation method is proposed in [41] and [44]. Their method relies on averaging of the trajectories and allows for larger step-sizes. In the special case of composite minimization problems, namely A = ∂g, for some proper, lower semic...

278 | Convex Analysis and Monotone Operator Theory in Hilbert Spaces - Bauschke, Combettes - 2010
Citation Context: ...wide applications in pure and applied sciences, and because they provide a convenient framework for a unified treatment of equilibrium problems, variational inequalities, and convex optimization, see [5, 54] and references therein. Let H be a real Hilbert space and let T : H → 2^H be a set-valued maximal monotone operator [5]. In this context, a key problem is to find an element w ∈ H such that 0 ∈ T(w)...

266 | Stochastic approximation approach to stochastic programming - Juditsky, Lan, et al.
Citation Context: ...† LCSL, Istituto Italiano di Tecnologia and Massachusetts Institute of Technology, Bldg. 46-5155, 77 Massachusetts Avenue, Cambridge, MA 02139, USA (Silvia.Villa@iit.it, Cong.Bang@iit.it); see e.g. [40]. Our convergence results on the sequence of the iterates do not require averaging, and allow the choice of step-sizes of the form n^{−θ} with θ ∈ ]0, 1]. When restricting to sparsity based optimization,...

251 | Splitting algorithms for the sum of two nonlinear operators - Lions, Mercier - 1979
Citation Context: ...method is the forward-backward splitting algorithm [5, 14]: (∀n ∈ N) w_{n+1} = J_{γn A} ∘ (I − γn B) w_n, (1.2) where γn ∈ ]0,+∞[ and J_{γn A} = (I + γn A)^{−1} is the resolvent operator of A. Since the seminal works [34, 43], forward-backward splitting methods have been considerably developed to be more flexible, to achieve better convergence properties, and to allow for numerical errors [5, 17, 42, 6, 51, 16]. In the im...
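
In the composite minimization case (A = ∂G, B = ∇F) the resolvent J_{γA} is the proximity operator of γG, and iteration (1.2) becomes the proximal gradient method. A minimal numerical sketch, assuming F(w) = ½‖Xw − y‖² and G = λ‖·‖₁; these choices, and all names below, are ours, purely for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1, i.e. the resolvent of t times the subdifferential of ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(X, y, lam, gamma, n_iters=500):
    """w_{n+1} = J_{gamma A}((I - gamma B) w_n), with B = grad F
    (forward, explicit step) and J_{gamma A} = prox (backward step)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y)                           # forward step on B
        w = soft_threshold(w - gamma * grad, gamma * lam)  # backward step on A
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true
# Step-size below 2*beta, where beta = 1/||X||^2 is the cocoercivity constant of B:
w_hat = forward_backward(X, y, lam=0.1, gamma=1.0 / np.linalg.norm(X, 2) ** 2)
```

With a small regularization weight and noiseless data, `w_hat` recovers the sparse `w_true` up to a small shrinkage bias.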

237 | A stochastic estimation of the maximum of a regression function - Kiefer, Wolfowitz - 1952

192 | Acceleration of stochastic approximation by averaging - Polyak, Juditsky - 1992
Citation Context: ...mation algorithms were widely used in stochastic optimization; see e.g. [8, 21, 23, 29] and references therein. An improvement of the original stochastic approximation method is proposed in [41] and [44]. Their method relies on averaging of the trajectories and allows for larger step-sizes. In the special case of composite minimization problems, namely A = ∂g, for some proper, lower semicontinuous...

151 | Monotone (nonlinear) operators in Hilbert space - Minty - 1962
Citation Context: ...ates in expectation in the strongly monotone case, as well as almost sure convergence results under weaker assumptions. 1. Introduction. Maximal monotone operators have been extensively studied since [37], largely because they have wide applications in pure and applied sciences, and because they provide a convenient framework for a unified treatment of equilibrium problems, variational inequalities, a...

140 | Monte Carlo sampling methods - Shapiro - 2003
Citation Context: ...optimization problems, there is a line of research studying stochastic algorithms for variational inequalities on finite dimensional spaces. The sample average approximation has been studied e.g. in [50, 12] (see also references therein), and a mirror proximal stochastic approximation algorithm to solve variational inequalities corresponding to a maximal monotone operator has been proposed in [25]. A sto...

138 | Variational inequalities - Lions, Stampacchia - 1967
Citation Context: ...he subdifferential of G ∈ Γ0(H). Problem 5.1. Let B : H → H be a β-cocoercive operator, for some β ∈ ]0,+∞[, and let G be a function in Γ0(H). The problem is to solve the following variational inequality [33, 54, 5]: find w̄ ∈ H such that (∀w ∈ H) ⟨w̄ − w | Bw̄⟩ + G(w̄) ≤ G(w), (5.1) under the assumption that (5.1) has at least one solution. In the setting of Problem 5.1, let (Bn)n∈N∗ be a random process taking values ...

133 | Solving monotone inclusions via compositions of nonexpansive averaged operators - Combettes - 2004
Citation Context: ...ooth convex function. There is a vast literature on algorithmic schemes for solving (1.1) that separate the contributions of A and B. One well-known method is the forward-backward splitting algorithm [5, 14]: (∀n ∈ N) w_{n+1} = J_{γn A} ∘ (I − γn B) w_n, (1.2) where γn ∈ ]0,+∞[ and J_{γn A} = (I + γn A)^{−1} is the resolvent operator of A. Since the seminal works [34, 43], forward-backward splitting methods have been con...

133 | Dual averaging methods for regularized stochastic learning and online optimization - Xiao - 2010
Citation Context: ...lues which is optimal both with respect to the smooth component and the non-smooth term. Similar accelerated proximal gradient algorithms have also been studied in the machine learning community; see [30, 52], and also [10, 48, 49, 55]. When restricted to the composite optimization case, our stochastic forward-backward splitting algorithm is related to the FOBOS algorithm presented in [20]. With respect t...

129 | Efficient online and batch learning using forward backward splitting - Duchi, Singer - 2009
Citation Context: ...munity; see [30, 52], and also [10, 48, 49, 55]. When restricted to the composite optimization case, our stochastic forward-backward splitting algorithm is related to the FOBOS algorithm presented in [20]. With respect to [20], we allow for an additional relaxation step, which in practice can speed up convergence. It is worth noting that the FOBOS algorithm can also be applied when F is not differentiable...

126 | Fonctions convexes duales et points proximaux dans un espace hilbertien - Moreau - 1962
Citation Context: ...recall that JA is well defined and single-valued [37], and can therefore be identified with an operator JA : H → H. When A = ∂G for some G ∈ Γ0(H), then JA coincides with the proximity operator of G [39], which is defined as proxG : H → H : w ↦ argmin_{v∈H} G(v) + (1/2)‖w − v‖². (3.6) Finally, let β ∈ R. We define the family of functions ϕβ : ]0,+∞[ → R : t ↦ β^{−1}(t^β − 1) if β ≠ 0, and log t if β = 0. Th...
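
Definition (3.6) can be checked directly by brute force; in the sketch below G is taken to be the indicator of the interval [-1, 1] (our choice, not from the paper), so that prox_G reduces to the projection onto that interval:

```python
import numpy as np

def prox_grid(G, w, grid):
    """Brute-force evaluation of (3.6): argmin_v G(v) + 0.5*||w - v||^2,
    restricted to a finite grid of candidate points v."""
    vals = [G(v) + 0.5 * (w - v) ** 2 for v in grid]
    return grid[int(np.argmin(vals))]

# G = indicator of the closed convex set [-1, 1]:
G = lambda v: 0.0 if abs(v) <= 1.0 else np.inf
grid = np.linspace(-2.0, 2.0, 4001)   # grid spacing 0.001
p = prox_grid(G, 1.7, grid)           # projection of 1.7 onto [-1, 1]
```

Since G vanishes on [-1, 1] and is +∞ outside, the minimizer of (3.6) is the feasible point closest to w, here 1.0.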

86 | SVM optimization: inverse dependence on training set size - Shalev-Shwartz, Srebro - 2008
Citation Context: ...al both with respect to the smooth component and the non-smooth term. Similar accelerated proximal gradient algorithms have also been studied in the machine learning community; see [30, 52], and also [10, 48, 49, 55]. When restricted to the composite optimization case, our stochastic forward-backward splitting algorithm is related to the FOBOS algorithm presented in [20]. With respect to [20], we allow for an add...

79 | Régularisation d'inéquations variationnelles par approximations successives, Revue Française d'Automatique, Informatique et Recherche Opérationnelle - Martinet - 1970
Citation Context: ...oof. The results follow from Theorem 4.7. When G is the indicator function of a non-empty, closed, convex subset C of H, Problem 5.1 reduces to the problem of solving a classic variational inequality [35, 33], namely to find w̄ such that (∀w ∈ C) ⟨Bw̄ | w̄ − w⟩ ≤ 0. (5.3) Proximal algorithms are often used to solve this problem; see [5, Chapter 25] and references therein. When B is accessible only through a ...
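
When B is moreover strongly monotone, the forward-backward iteration for (5.3) is the classic projected scheme w_{n+1} = P_C(w_n − γ B w_n). A deterministic toy sketch; the affine operator, the box C, and the step-size are assumptions for illustration only:

```python
import numpy as np

# B(w) = M w - b with M positive definite, hence B strongly monotone.
M = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, 5.0])
B = lambda w: M @ w - b

project_box = lambda w: np.clip(w, 0.0, 1.0)  # P_C for C = [0, 1]^2

w = np.zeros(2)
gamma = 0.2
for _ in range(2000):
    w = project_box(w - gamma * B(w))   # w_{n+1} = P_C(w_n - gamma * B(w_n))
# w now approximates the point w* in C satisfying
# <B(w*) | w* - v> <= 0 for all v in C.
```

Because the projection is nonexpansive and I − γB is a contraction for small γ, the iteration converges linearly to the unique solution.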

73 | A convergence theorem for nonnegative almost supermartingales and some applications - Robbins, Siegmund - 1971
Citation Context: ...p convergence. It is worth noting that the FOBOS algorithm can also be applied when F is not differentiable. In the optimization setting, the study of almost sure convergence has a long history; see e.g. [46, 28, 8, 13] and references therein. Recent results on almost sure convergence of the projected stochastic gradient algorithm can be found in [7, 38], under rather technical assumptions. Our results are generalizatio...

65 | An optimal method for stochastic composite optimization - Lan
Citation Context: ...ion F : H → R, stochastic implementations of forward-backward splitting algorithms, and more generally of first order methods, have received much attention and have recently been studied in several papers [18, 31, 32, 19]. In particular, [31] proposes an accelerated method and derives a rate of convergence for the objective function values which is optimal both with respect to the smooth component and the non-smooth t...

54 | Ergodic convergence to a zero of the sum of monotone operators in Hilbert space - Passty - 1979
Citation Context: ...method is the forward-backward splitting algorithm [5, 14]: (∀n ∈ N) w_{n+1} = J_{γn A} ∘ (I − γn B) w_n, (1.2) where γn ∈ ]0,+∞[ and J_{γn A} = (I + γn A)^{−1} is the resolvent operator of A. Since the seminal works [34, 43], forward-backward splitting methods have been considerably developed to be more flexible, to achieve better convergence properties, and to allow for numerical errors [5, 17, 42, 6, 51, 16]. In the im...

49 | Gradient convergence in gradient methods - Bertsekas, Tsitsiklis - 2000
Citation Context: ...(4.3) and condition (A4) becomes Σ_{n∈N∗} λn γn = +∞ and Σ_{n∈N∗} λn γn² < +∞. The latter are the usual conditions required on the step-size in the study of stochastic gradient descent algorithms (see e.g. [9]). These conditions guarantee a sufficient, but not too fast, decrease of the step-sizes. 4.2. Almost sure convergence. We now state our first result for the general setting, collecting some basi...
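
For the common choice λn ≡ 1 and γn = n^{−θ}, these two conditions hold exactly when θ ∈ ]1/2, 1]: the first sum diverges for θ ≤ 1, while the second converges for 2θ > 1. A finite partial-sum illustration (suggestive only; finite sums cannot prove divergence):

```python
import numpy as np

def partial_sums(theta, N=10**6):
    """Partial sums of gamma_n = n**(-theta) and of gamma_n**2, up to N."""
    n = np.arange(1, N + 1, dtype=float)
    g = n ** (-theta)
    return g.sum(), (g ** 2).sum()

s1, s2 = partial_sums(theta=0.75)
# s1 keeps growing roughly like 4 * N**0.25 (divergent series), while
# s2 approaches the finite limit zeta(1.5) ~ 2.612 (convergent series).
```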

47 | Non-asymptotic analysis of stochastic approximation algorithms for machine learning - Bach, Moulines
Citation Context: ...sion of Proposition 3.3 in the finite dimensional setting can be found in [22]. The following lemma establishes a convergence rate for numerical sequences satisfying a given recursive inequality; see [3] and [47] for a proof. Lemma 3.4. Let α be in ]0, 1], let c and τ be in ]0,+∞[, and let (ηn)n∈N∗ be the sequence defined by (∀n ∈ N∗) ηn = c n^{−α}. Let (sn)n∈N∗ be such that (∀n ∈ N∗) 0 ≤ s_{n+1} ≤ (1 − ηn...

36 | Solving variational inequalities with stochastic mirror-prox algorithm - Juditsky, Nemirovski, et al.
Citation Context: ...in [50, 12] (see also references therein), and a mirror proximal stochastic approximation algorithm to solve variational inequalities corresponding to a maximal monotone operator has been proposed in [25]. A stochastic iterative proximal method has been proposed in [27], and almost sure convergence properties of a stochastic forward-backward splitting algorithm for solving strongly monotone variationa...

35 | On-line learning for very large data sets - Bottou, Le Cun
Citation Context: ...al both with respect to the smooth component and the non-smooth term. Similar accelerated proximal gradient algorithms have also been studied in the machine learning community; see [30, 52], and also [10, 48, 49, 55]. When restricted to the composite optimization case, our stochastic forward-backward splitting algorithm is related to the FOBOS algorithm presented in [20]. With respect to [20], we allow for an add...

32 | Accelerated gradient methods for stochastic optimization and online learning - Hu, Kwok, et al. - 2009
Citation Context: ...lues which is optimal both with respect to the smooth component and the non-smooth term. Similar accelerated proximal gradient algorithms have also been studied in the machine learning community; see [30, 52], and also [10, 48, 49, 55]. When restricted to the composite optimization case, our stochastic forward-backward splitting algorithm is related to the FOBOS algorithm presented in [20]. With respect t...

22 | Nonlinear monotone operators and convex sets in Banach spaces - Browder - 1965

22 | Stochastic approximation approaches to the stochastic variational inequality problem - Jiang, Xu - 2008
Citation Context: ...hod has been proposed in [27], and almost sure convergence properties of a stochastic forward-backward splitting algorithm for solving strongly monotone variational inequalities have been studied in [24]. Our results are a generalization of this kind of analysis to the monotone inclusion case. Notation. Throughout, (Ω, A, P) is a probability space, N∗ = N\{0}, H is a real separable Hilbert space, and 2...

18 | Accelerated and inexact forward-backward algorithms - Villa, Salzo, et al.

17 | Multi-stage convex relaxation for learning with sparse regularization - Zhang - 2010
Citation Context: ...al both with respect to the smooth component and the non-smooth term. Similar accelerated proximal gradient algorithms have also been studied in the machine learning community; see [30, 52], and also [10, 48, 49, 55]. When restricted to the composite optimization case, our stochastic forward-backward splitting algorithm is related to the FOBOS algorithm presented in [20]. With respect to [20], we allow for an add...

15 | Asymptotic Properties of Some Projection-Based Robbins-Monro Procedures in a Hilbert Space - Chen, White - 2002
Citation Context: ...p convergence. It is worth noting that the FOBOS algorithm can also be applied when F is not differentiable. In the optimization setting, the study of almost sure convergence has a long history; see e.g. [46, 28, 8, 13] and references therein. Recent results on almost sure convergence of the projected stochastic gradient algorithm can be found in [7, 38], under rather technical assumptions. Our results are generalizatio...

15 | On stochastic gradient and subgradient methods with adaptive steplength sequences - Yousefian, Nedic, et al.
Citation Context: ...∈ N∗) J_{γn A} = prox_{γn G}. In the case when G is the indicator function of a non-empty closed convex set and (∀n ∈ N∗) λn = 1, a similar result on the rate of convergence of E[‖wn − w̄‖] has been obtained in [53], under assumptions similar to (A1), ..., (A4), for the case where L is strongly convex, under the additional assumption of boundedness of the conditional expectations of (‖Bn − ∇L(wn)‖²)n∈N∗. Coro...

7 | On stochastic proximal gradient algorithms - Atchade, Fort, et al. - 2014
Citation Context: ...pers dealing with the infinite dimensional setting. Our results about convergence rates in expectation are a generalization to monotone inclusions of [3, Section 3] (see also the very recent preprint [1]). Going beyond optimization problems, there is a line of research studying stochastic algorithms for variational inequalities on finite dimensional spaces. The sample average approximation has been s...

7 | First order methods of smooth convex optimization with inexact oracle, CORE Discussion Paper 2011/02 - Devolder, Glineur, et al. - 2011
Citation Context: ...ion F : H → R, stochastic implementations of forward-backward splitting algorithms, and more generally of first order methods, have received much attention and have recently been studied in several papers [18, 31, 32, 19]. In particular, [31] proposes an accelerated method and derives a rate of convergence for the objective function values which is optimal both with respect to the smooth component and the non-smooth t...

6 | Variable metric quasi-Fejér monotonicity - Combettes, Vu - 2013

6 | A sparsity preserving stochastic gradient method for composite optimization - Lin, Chen, et al. - 2011
Citation Context: ...re averaging, and allow the choice of step-sizes of the form n^{−θ} with θ ∈ ]0, 1]. When restricting to sparsity based optimization, avoiding averaging is relevant to preserve sparsity of the solutions [32]. A ubiquitous assumption in the stochastic approximation literature is boundedness (of some kind) of the stochastic estimates. A contribution of our approach is a relaxation of this requirement: we u...

5 | Ergodic mirror descent - Duchi, Agarwal, et al.
Citation Context: ...ion F : H → R, stochastic implementations of forward-backward splitting algorithms, and more generally of first order methods, have received much attention and have recently been studied in several papers [18, 31, 32, 19]. In particular, [31] proposes an accelerated method and derives a rate of convergence for the objective function values which is optimal both with respect to the smooth component and the non-smooth t...

5 | On the method of generalized stochastic gradients and quasi-Fejér sequences, Cybernetics 5 - Ermol'ev - 1969
Citation Context: ...ry began with the seminal papers of Robbins and Monro [45] and Kiefer and Wolfowitz [26]. After those papers, stochastic approximation algorithms were widely used in stochastic optimization; see e.g. [8, 21, 23, 29] and references therein. An improvement of the original stochastic approximation method is proposed in [41] and [44]. Their method relies on averaging of the trajectories and allows for larger ste...

5 | Regularized iterative stochastic approximation methods for stochastic variational inequality problems - Koshal, Nedic, et al.
Citation Context: ...tochastic approximation algorithm to solve variational inequalities corresponding to a maximal monotone operator has been proposed in [25]. A stochastic iterative proximal method has been proposed in [27], and almost sure convergence properties of a stochastic forward-backward splitting algorithm for solving strongly monotone variational inequalities have been studied in [24]. Our results are a gener...

5 | Almost sure convergence of stochastic gradient processes with matrix step sizes - Monnez
Citation Context: ...dy of almost sure convergence has a long history; see e.g. [46, 28, 8, 13] and references therein. Recent results on almost sure convergence of the projected stochastic gradient algorithm can be found in [7, 38], under rather technical assumptions. Our results are generalizations to monotone inclusions of the analysis of the stochastic projected subgradient algorithm in [4]. Note that the latter is also one ...

5 | Convergence of stochastic proximal gradient algorithm - Rosasco, Villa, et al. - 2014
Citation Context: ...ties, we obtain an additional convergence result without imposing stronger monotonicity properties on B, which requires averaging of the iterates. The present paper extends a short conference version [47] restricted to the minimization case. The paper is organized as follows. We first review related work in Section 2. Section 3 collects some preliminaries, and Section 4 contains the main results of the...

4 | Nonstationary stochastic programming problems - Gaivoronski - 1978

3 | Hilbert-valued perturbed subgradient algorithms - Barty, Roy, et al.
Citation Context: ...ient algorithm can be found in [7, 38], under rather technical assumptions. Our results are generalizations to monotone inclusions of the analysis of the stochastic projected subgradient algorithm in [4]. Note that the latter is also one of the few papers dealing with the infinite dimensional setting. Our results about convergence rates in expectation are a generalization to monotone inclusions of [3...

3 | Random Fejér and quasi-Fejér sequences - Ermol'ev, Tuniev - 1968
Citation Context: ...t β ∈ R. We define the family of functions ϕβ : ]0,+∞[ → R : t ↦ β^{−1}(t^β − 1) if β ≠ 0, and log t if β = 0. The following notions and results will be required in the following sections. Definition 3.2. [22] Let S be a non-empty subset of H. Then, (i) a sequence (wn)n∈N∗ in H is quasi-Fejér monotone with respect to S if there exists (εn)n∈N∗ ∈ ℓ¹₊(N∗) such that (∀w ∈ S)(∀n ∈ N∗) ‖w_{n+1} − w‖² ≤ ‖wn ...
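
Spelled out, quasi-Fejér monotonicity requires ‖w_{n+1} − w‖² ≤ ‖w_n − w‖² + ε_n for a summable nonnegative sequence (ε_n). The sketch below illustrates the definition on a hand-picked toy sequence of our own (with S = {0}), checking that the occasional increases of the squared distance are summable:

```python
# A contraction plus a summable perturbation: w_n = 0.5*w_{n-1} + (-1)^n / n^2.
w = [1.0]
for n in range(1, 50):
    w.append(0.5 * w[-1] + (-1.0) ** n / n ** 2)

d2 = [x ** 2 for x in w]                          # squared distances to w* = 0
eps = [max(d2[k + 1] - d2[k], 0.0) for k in range(len(w) - 1)]
# Each occasional increase d2[k+1] - d2[k] is bounded by eps_k, and the
# eps_k have a small finite sum, as the definition demands; w_n -> 0.
```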

2 | Almost sure convergence of a stochastic approximation process in a convex set - Bennar, Monnez
Citation Context: ...dy of almost sure convergence has a long history; see e.g. [46, 28, 8, 13] and references therein. Recent results on almost sure convergence of the projected stochastic gradient algorithm can be found in [7, 38], under rather technical assumptions. Our results are generalizations to monotone inclusions of the analysis of the stochastic projected subgradient algorithm in [4]. Note that the latter is also one ...

2 | Semimartingales, volume 2 of de Gruyter Studies in Mathematics, Walter de Gruyter - Métivier - 1982
Citation Context: ...ts (namely, αn = 0 in assumption (A2)). One exception is [4], dealing with a stochastic projected subgradient algorithm on a Hilbert space. Our proof is based on quasi-martingale techniques (see e.g. [36]). Depending on the monotonicity properties of the operator B, we get two different convergence properties. Theorem 4.7. Suppose that (A1), (A2), (A3) and (A4) are satisfied. Let (wn)n∈N∗ be the seque...

1 | Méthodes numériques, Maîtrise de Mathématiques et Applications Fondamentales - Masson, York-Barcelona - 1976
Citation Context: ...ction values can be studied, in the general case of monotone inclusions such a quantity cannot be defined. In some cases, for instance for variational inequalities, a merit function can be introduced [2]. This merit function has been used to quantify the inaccuracy of an approximation of the solution in [25]. 5. Special cases. In this section, we study two special instances of Problem 4.1, namely var...

1 | Stochastic variational inequalities: residual minimization smoothing sample average approximations - Chen, Wets, et al.
Citation Context: ...optimization problems, there is a line of research studying stochastic algorithms for variational inequalities on finite dimensional spaces. The sample average approximation has been studied e.g. in [50, 12] (see also references therein), and a mirror proximal stochastic approximation algorithm to solve variational inequalities corresponding to a maximal monotone operator has been proposed in [25]. A sto...