As a well-known numerical method, the extragradient method solves the variational inequality of finding $x^* \in C$ such that $\langle Ax^*, x - x^* \rangle \ge 0$ for all $x \in C$. In this paper, we are devoted to solving the following hierarchical variational inequality: find $\tilde{x} \in \Omega$ such that $\langle (I - f)\tilde{x}, x - \tilde{x} \rangle \ge 0$ for all $x \in \Omega$, where $\Omega$ denotes the solution set of the above variational inequality and $f$ is a contraction. We first suggest and analyze an implicit extragradient method for solving this hierarchical variational inequality. It is shown that the net defined by the suggested implicit extragradient method converges strongly to the unique solution of the hierarchical variational inequality in Hilbert spaces. As a special case, we obtain the minimum-norm solution of the variational inequality.
The variational inequality problem is to find $x^* \in C$ such that
$$\langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C.$$
The set of solutions of the variational inequality problem is denoted by $\Omega$. It is well known that variational inequality theory has emerged as an important tool in studying a wide class of obstacle, unilateral, and equilibrium problems, which arise in several branches of pure and applied sciences, in a unified and general framework. Several numerical methods have been developed for solving variational inequalities and related optimization problems; see [1–24] and the references therein. In particular, Korpelevich's extragradient method, introduced in 1976, generates a sequence $\{x_n\}$ via the recursion
$$y_n = P_C(x_n - \lambda A x_n), \qquad x_{n+1} = P_C(x_n - \lambda A y_n), \qquad n \ge 0,$$
where $P_C$ is the metric projection from $\mathbb{R}^n$ onto $C$, $A$ is a monotone operator, and $\lambda$ is a constant. Korpelevich proved that the sequence $\{x_n\}$ converges strongly to a solution of the variational inequality. Note that the setting of the space is the Euclidean space $\mathbb{R}^n$.
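For readers who want to experiment, the two-step recursion above is easy to sketch numerically. The following Python snippet is our own illustration, not from the paper: the operator $A(x) = x - b$, the box constraint set, and all parameter values are choices made purely for demonstration.

```python
import numpy as np

def extragradient(A, proj, x0, lam=0.1, iters=500):
    """Korpelevich's recursion:
       y_n = P_C(x_n - lam*A(x_n)),  x_{n+1} = P_C(x_n - lam*A(y_n))."""
    x = x0
    for _ in range(iters):
        y = proj(x - lam * A(x))   # predictor ("extra-gradient") step
        x = proj(x - lam * A(y))   # corrector step evaluates A at y, not at x
    return x

# Toy problem: C = [0,1]^2 and A(x) = x - b, which is (strongly, hence) monotone.
# The VI solution is P_C(b) = [0.3, 1.0].
b = np.array([0.3, 2.0])
proj_C = lambda z: np.clip(z, 0.0, 1.0)   # projection onto a box is clipping
x_star = extragradient(lambda x: x - b, proj_C, np.zeros(2))
```

Here `np.clip` plays the role of $P_C$ because projection onto a box is componentwise truncation.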
Recently, hierarchical fixed point problems and hierarchical minimization problems have attracted many authors' attention due to their link with some convex programming problems; see [25–32]. Motivated and inspired by these results in the literature, in this paper we are devoted to solving the following hierarchical variational inequality:
$$\text{find } \tilde{x} \in \Omega \text{ such that } \langle (I - f)\tilde{x}, x - \tilde{x} \rangle \ge 0, \quad \forall x \in \Omega,$$
where $\Omega$ is the solution set of the variational inequality above and $f$ is a contraction.
For this purpose, we first suggest and analyze an implicit extragradient method. It is shown that the net $\{x_t\}$ defined by this implicit extragradient method converges strongly to the unique solution of the hierarchical variational inequality in Hilbert spaces. As a special case, we obtain the minimum-norm solution of the variational inequality.
Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$, and let $C$ be a closed convex subset of $H$. Recall that a mapping $A: C \to H$ is called $\alpha$-inverse strongly monotone if there exists a positive real number $\alpha$ such that
$$\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2, \quad \forall x, y \in C.$$
A mapping $f: C \to H$ is said to be a $\rho$-contraction if there exists a constant $\rho \in [0, 1)$ such that
$$\|f(x) - f(y)\| \le \rho \|x - y\|, \quad \forall x, y \in C.$$
It is well known that, for any $u \in H$, there exists a unique $u_0 \in C$ such that
$$\|u - u_0\| = \inf\{\|u - x\| : x \in C\}.$$
We denote $u_0 = P_C(u)$, where $P_C$ is called the metric projection of $H$ onto $C$. The metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(i) $\|P_C(x) - P_C(y)\| \le \|x - y\|$ for all $x, y \in H$;
(ii) $\langle x - y, P_C(x) - P_C(y) \rangle \ge \|P_C(x) - P_C(y)\|^2$ for every $x, y \in H$;
(iii) $\langle x - P_C(x), y - P_C(x) \rangle \le 0$ for all $x \in H$, $y \in C$;
(iv) $\|x - y\|^2 \ge \|x - P_C(x)\|^2 + \|y - P_C(x)\|^2$ for all $x \in H$, $y \in C$.
Such properties of $P_C$ will be crucial in the proof of our main results. Let $A$ be a monotone mapping of $C$ into $H$. In the context of the variational inequality problem, it is easy to see from property (iii) that
$$u \in \Omega \iff u = P_C(u - \lambda A u), \quad \forall \lambda > 0.$$
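The fixed-point characterization of solutions (a point $u$ solves the variational inequality if and only if $u = P_C(u - \lambda A u)$ for any $\lambda > 0$) is easy to check numerically. The sketch below is our own illustration, not from the paper; the operator, box, and step sizes are chosen only for demonstration.

```python
import numpy as np

proj_C = lambda z: np.clip(z, 0.0, 1.0)   # P_C for the box C = [0,1]^2
b = np.array([0.3, 2.0])
A = lambda x: x - b                       # a monotone operator
x_star = np.array([0.3, 1.0])             # the VI solution here is x* = P_C(b)

# x* solves the VI  <=>  x* = P_C(x* - lam * A(x*))  for every lam > 0.
residuals = [np.linalg.norm(proj_C(x_star - lam * A(x_star)) - x_star)
             for lam in (0.01, 0.5, 1.0, 10.0)]
```

Every residual is zero, reflecting that the characterization holds for all positive step sizes simultaneously.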
We need the following lemmas for proving our main result.
Lemma 2.1 (see ).
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let the mapping $A: C \to H$ be $\alpha$-inverse strongly monotone, and let $\lambda > 0$ be a constant. Then, one has
$$\|(I - \lambda A)x - (I - \lambda A)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Ax - Ay\|^2, \quad \forall x, y \in C.$$
In particular, if $0 < \lambda \le 2\alpha$, then $I - \lambda A$ is nonexpansive.
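Lemma 2.1's inequality, $\|(I-\lambda A)x - (I-\lambda A)y\|^2 \le \|x-y\|^2 + \lambda(\lambda - 2\alpha)\|Ax - Ay\|^2$, can be probed numerically. In this sketch (our own illustration), we take $A = I - P_C$, which is firmly nonexpansive and therefore $1$-inverse strongly monotone ($\alpha = 1$), and check the inequality together with the nonexpansiveness of $I - \lambda A$ for $\lambda \in (0, 2\alpha]$.

```python
import numpy as np

rng = np.random.default_rng(0)
proj_C = lambda z: np.clip(z, 0.0, 1.0)   # P_C for the box C = [0,1]^3
A = lambda x: x - proj_C(x)               # firmly nonexpansive => 1-ism (alpha = 1)
alpha, lam = 1.0, 1.5                     # lam lies in (0, 2*alpha]

lemma_ok = nonexp_ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = np.linalg.norm((x - lam * A(x)) - (y - lam * A(y))) ** 2
    rhs = (np.linalg.norm(x - y) ** 2
           + lam * (lam - 2 * alpha) * np.linalg.norm(A(x) - A(y)) ** 2)
    lemma_ok &= lhs <= rhs + 1e-9         # Lemma 2.1 inequality
    nonexp_ok &= np.sqrt(lhs) <= np.linalg.norm(x - y) + 1e-9  # nonexpansiveness
```

Note that since $\lambda(\lambda - 2\alpha) < 0$ here, the lemma's bound is strictly stronger than plain nonexpansiveness whenever $Ax \ne Ay$.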
Lemma 2.2 (see ).
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Assume that the mapping $A: C \to H$ is monotone and weakly continuous along segments, that is, $A(x + t(y - x)) \rightharpoonup Ax$ as $t \to 0^+$, for each $x, y \in C$. Then, the variational inequality
$$x^* \in C, \quad \langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C,$$
is equivalent to the dual variational inequality
$$x^* \in C, \quad \langle Ax, x - x^* \rangle \ge 0, \quad \forall x \in C.$$
3. Main Result
In this section, we introduce our implicit extragradient algorithm and show its strong convergence to the unique solution of the hierarchical variational inequality.
Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A: C \to H$ be an $\alpha$-inverse strongly monotone mapping. Let $f: C \to H$ be a (possibly nonself) contraction with coefficient $\rho \in [0, 1)$. For any $t \in (0, 1)$, define a net $\{x_t\}$ by the implicit extragradient scheme (3.1),
where $\lambda \in (0, 2\alpha)$ is a constant.
Note that $f$ is possibly a nonself mapping. Hence, if we take $f = 0$, then (3.1) reduces to the scheme (3.2).
We notice that the net $\{x_t\}$ defined by (3.1) is well defined. In fact, (3.1) induces a self-mapping $W$ of $C$.
From Lemma 2.1, we know that if $\lambda \in (0, 2\alpha)$, the mapping $P_C(I - \lambda A)$ is nonexpansive.
For any $x, y \in C$, we then have
$$\|Wx - Wy\| \le t\|f(x) - f(y)\| + (1 - t)\|x - y\| \le (1 - (1 - \rho)t)\|x - y\|.$$
This shows that the mapping $W$ is a contraction. By the Banach contraction mapping principle, we immediately deduce that the net (3.1) is well defined.
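The displayed formula (3.1) did not survive extraction, so the snippet below is a purely illustrative stand-in of our own: we assume an implicit extragradient-type net of the form $x_t = P_C[t f(x_t) + (1-t) S(S(x_t))]$ with $S = P_C \circ (I - \lambda A)$. Since $S \circ S$ is nonexpansive for $\lambda \in (0, 2\alpha)$, the induced map is a contraction with constant $1 - (1-\rho)t$, and the fixed point $x_t$ can be computed by Banach iteration, exactly as in the well-definedness argument above. All concrete choices ($A$, $f$, $C$, $t$, $\lambda$) are ours.

```python
import numpy as np

proj_C = lambda z: np.clip(z, 0.0, 1.0)   # P_C for the box C = [0,1]^2
b = np.array([0.3, 2.0])
A = lambda x: x - b                       # 1-inverse strongly monotone (alpha = 1)
f = lambda x: 0.5 * x                     # rho-contraction with rho = 0.5
lam, t = 0.1, 1e-3                        # lam in (0, 2*alpha), small t

S = lambda x: proj_C(x - lam * A(x))      # nonexpansive projected-gradient step
W = lambda x: proj_C(t * f(x) + (1 - t) * S(S(x)))  # contraction, factor 1-(1-rho)t

x = np.zeros(2)                           # Banach iteration for the fixed point x_t
for _ in range(5000):
    x = W(x)
residual = np.linalg.norm(W(x) - x)
# For this strongly monotone A the solution set is the singleton
# {P_C(b)} = {[0.3, 1.0]}, so x_t lies near it for small t.
```

The inner loop mirrors the contraction argument: each application of `W` shrinks distances by at least the factor $1 - (1-\rho)t$, so the iterates converge to the unique fixed point $x_t$.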
Suppose the solution set $\Omega$ of the variational inequality is nonempty. Then the net $\{x_t\}$ generated by the implicit extragradient method (3.1) converges in norm, as $t \to 0^+$, to the unique solution of the hierarchical variational inequality. In particular, if one takes $f = 0$, then the net defined by (3.2) converges in norm, as $t \to 0^+$, to the minimum-norm solution of the variational inequality.
Take $x^* \in \Omega$. Since $x^* \in C$, using the relation (2.4), we have $x^* = P_C(x^* - \lambda A x^*)$ for every $\lambda > 0$; in particular, this holds for the constant $\lambda$ fixed in (3.1).
From (3.1) and the nonexpansiveness noted above, we have
$$\|x_t - x^*\| \le t\|f(x_t) - x^*\| + (1 - t)\|x_t - x^*\|,$$
and hence
$$\|x_t - x^*\| \le \|f(x_t) - x^*\| \le \rho\|x_t - x^*\| + \|f(x^*) - x^*\|,$$
so that $\|x_t - x^*\| \le \|f(x^*) - x^*\|/(1 - \rho)$. Therefore, $\{x_t\}$ is bounded, and so are $\{f(x_t)\}$ and $\{y_t\}$. Since $A$ is $\alpha$-inverse strongly monotone, it is $\frac{1}{\alpha}$-Lipschitz continuous. Consequently, $\{Ax_t\}$ and $\{Ay_t\}$ are also bounded.
From (3.6), (2.5), and the convexity of the norm, we deduce
Therefore, we have
By property (ii) of the metric projection $P_C$, we have
for some appropriate constant. It follows that
and hence (by (3.7))
which implies that
Since , we derive
Next, we show that the net $\{x_t\}$ is relatively norm-compact as $t \to 0^+$. Assume that $\{t_n\} \subset (0, 1)$ is such that $t_n \to 0^+$ as $n \to \infty$. Put $x_n := x_{t_n}$ and $y_n := y_{t_n}$.
By property (ii) of the metric projection $P_C$, we have
Since $\{x_n\}$ is bounded, without loss of generality we may assume that $\{x_n\}$ converges weakly to a point $\tilde{x}$. Since $\|x_n - y_n\| \to 0$, $\{y_n\}$ also converges weakly to the same point $\tilde{x}$.
Next we show that $\tilde{x} \in \Omega$. We define a mapping $T$ by
$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C, \end{cases}$$
where $N_C v$ is the normal cone to $C$ at $v$.
Then $T$ is maximal monotone (see ). Let $(v, w) \in G(T)$. Since $w - Av \in N_C v$ and $x_n \in C$, we have $\langle v - x_n, w - Av \rangle \ge 0$. On the other hand, from $y_n = P_C(x_n - \lambda A x_n)$, we have
Therefore, we have
Noting that $\|x_n - y_n\| \to 0$, $y_n \rightharpoonup \tilde{x}$, and $A$ is Lipschitz continuous, we obtain $\langle v - \tilde{x}, w \rangle \ge 0$. Since $T$ is maximal monotone, we have $0 \in T\tilde{x}$ and hence $\tilde{x} \in \Omega$.
Therefore, we can substitute $\tilde{x}$ for $x^*$ in (3.20) to get
Consequently, the weak convergence of $\{x_n\}$ and $\{y_n\}$ to $\tilde{x}$ actually implies that $x_n \to \tilde{x}$ strongly. This proves the relative norm-compactness of the net $\{x_t\}$ as $t \to 0^+$.
Now we return to (3.20) and take the limit as $n \to \infty$ to get
In particular, $\tilde{x}$ solves the following variational inequality
$$\tilde{x} \in \Omega, \quad \langle (I - f)\tilde{x}, x - \tilde{x} \rangle \ge 0, \quad \forall x \in \Omega,$$
or the equivalent dual variational inequality (see Lemma 2.2)
$$\tilde{x} \in \Omega, \quad \langle (I - f)x, x - \tilde{x} \rangle \ge 0, \quad \forall x \in \Omega.$$
Therefore, $\tilde{x} = P_\Omega f(\tilde{x})$. That is, $\tilde{x}$ is the unique fixed point in $\Omega$ of the contraction $P_\Omega f$. Clearly, this is sufficient to conclude that the entire net $\{x_t\}$ converges in norm to $\tilde{x}$ as $t \to 0^+$.
Finally, if we take $f = 0$, then VI (3.28) reduces to
$$\tilde{x} \in \Omega, \quad \langle \tilde{x}, x - \tilde{x} \rangle \ge 0, \quad \forall x \in \Omega.$$
This clearly implies that
$$\|\tilde{x}\| \le \|x\|, \quad \forall x \in \Omega.$$
Therefore, $\tilde{x}$ is the minimum-norm solution of the variational inequality. This completes the proof.
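To make the minimum-norm special case concrete, the toy sketch below (our own illustration, with an assumed $f = 0$ form of scheme (3.2)) takes $A = 0$, so that $\Omega = C$ and the minimum-norm solution is simply $P_C(0)$; iterating the induced contraction recovers it.

```python
import numpy as np

lo, hi = np.array([1.0, -3.0]), np.array([2.0, -1.0])
proj_C = lambda z: np.clip(z, lo, hi)     # P_C for the box C = [1,2] x [-3,-1]

# With A = 0 every point of C solves the VI, so Omega = C and the
# minimum-norm solution is P_C(0).
t = 0.01
W = lambda x: proj_C((1 - t) * proj_C(x))  # assumed f = 0 form of the implicit net

x = np.array([2.0, -3.0])                  # start at the corner farthest from 0
for _ in range(1000):
    x = W(x)
min_norm = proj_C(np.zeros(2))             # the minimum-norm point of C
```

The factor $(1 - t)$ pulls the iterate toward the origin while the projection keeps it in $C$, so the fixed point lands on the point of $C$ closest to $0$, exactly as the theorem's special case predicts.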
(1) Note that our implicit extragradient algorithms (3.1) and (3.2) converge strongly in an infinite-dimensional Hilbert space.
The authors thank the referees for their comments and suggestions, which improved the presentation of this paper. The first author was supported in part by the Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin and NSFC 11071279. The second author was supported in part by NSC 99-2221-E-230-006.
Lions, J-L, Stampacchia, G: Variational inequalities. Communications on Pure and Applied Mathematics. 20, 493–519 (1967)
Liu, F, Nashed, MZ: Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Analysis. 6(4), 313–344 (1998)
Iusem, AN, Svaiter, BF: A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization. 42(4), 309–321 (1997)
Khobotov, EN: Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Computational Mathematics and Mathematical Physics. 27(5), 120–127 (1989)
Solodov, MV, Svaiter, BF: A new projection method for variational inequality problems. SIAM Journal on Control and Optimization. 37(3), 765–776 (1999)
Aslam Noor, M: Some developments in general variational inequalities. Applied Mathematics and Computation. 152(1), 199–277 (2004)
Yao, Y, Yao, J-C: On modified iterative method for nonexpansive mappings and monotone mappings. Applied Mathematics and Computation. 186(2), 1551–1558 (2007)
Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. Journal of Optimization Theory and Applications. 118(2), 417–428 (2003)
Yao, Y, Liou, Y-C, Yao, J-C: A new hybrid iterative algorithm for fixed-point problems, variational inequality problems, and mixed equilibrium problems. Fixed Point Theory and Applications. 2008, (2008)
Wang, S, Marino, G, Wang, F: Strong convergence theorems for a generalized equilibrium problem with a relaxed monotone mapping and a countable family of nonexpansive mappings in a Hilbert space. Fixed Point Theory and Applications. 2010, (2010)
Cianciaruso, F, Marino, G, Muglia, L, Yao, Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory and Applications. 2010, (2010)
Peng, J-W, Wu, S-Y, Yao, J-C: A new iterative method for finding common solutions of a system of equilibrium problems, fixed-point problems, and variational inequalities. Abstract and Applied Analysis. 2010, (2010)
Zeng, L-C, Ansari, QH, Shyu, DS, Yao, J-C: Strong and weak convergence theorems for common solutions of generalized equilibrium problems and zeros of maximal monotone operators. Fixed Point Theory and Applications. 2010, (2010)
Cianciaruso, F, Colao, V, Muglia, L, Xu, H-K: On an implicit hierarchical fixed point approach to variational inequalities. Bulletin of the Australian Mathematical Society. 80(1), 117–124 (2009)
Moudafi, A: Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Problems. 23(4), 1635–1640 (2007)
Marino, G, Colao, V, Muglia, L, Yao, Y: Krasnoselski-Mann iteration for hierarchical fixed points and equilibrium problem. Bulletin of the Australian Mathematical Society. 79(2), 187–200 (2009)
Lu, X, Xu, H-K, Yin, X: Hybrid methods for a class of monotone variational inequalities. Nonlinear Analysis: Theory, Methods & Applications. 71(3-4), 1032–1041 (2009)
Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization. 14(5), 877–898 (1976)
Sabharwal, A, Potter, LC: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Transactions on Signal Processing. 46(9), 2345–2352 (1998)
Liu, X, Cui, Y: The common minimal-norm fixed point of a finite family of nonexpansive mappings. Nonlinear Analysis: Theory, Methods & Applications. 73(1), 76–83 (2010)
Yao, Y, Chen, R, Xu, H-K: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Analysis: Theory, Methods & Applications. 72(7-8), 3447–3456 (2010)