Research

Strong convergence of relaxed hybrid steepest-descent methods for triple hierarchical constrained optimization

L C Zeng1, M M Wong2* and J C Yao3

Author Affiliations

1 Department of Mathematics, Shanghai Normal University, and Scientific Computing Key Laboratory of Shanghai Universities, Shanghai 200234, China

2 Department of Applied Mathematics, Chung Yuan Christian University, Chung Li 32023, Taiwan

3 Center for General Education, Kaohsiung Medical University, Kaohsiung 807, Taiwan


Fixed Point Theory and Applications 2012, 2012:29 doi:10.1186/1687-1812-2012-29

 Received: 16 June 2011 Accepted: 27 February 2012 Published: 27 February 2012

This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Many practical problems, such as signal processing and network resource allocation, have been formulated as a monotone variational inequality over the fixed point set of a nonexpansive mapping, and iterative algorithms for solving them have been proposed. The purpose of this article is to investigate a monotone variational inequality with a variational inequality constraint over the fixed point set of one or finitely many nonexpansive mappings, which is called triple-hierarchical constrained optimization. Two relaxed hybrid steepest-descent algorithms for solving the triple-hierarchical constrained optimization problem are proposed, and their strong convergence is proven. Applications of these results to the constrained generalized pseudoinverse are included.

AMS Subject Classifications: 49J40; 65K05; 47H09.

Keywords:
triple-hierarchical constrained optimization; variational inequality; monotone operator; relaxed hybrid steepest-descent method; nonexpansive mapping; fixed point; strong convergence

1 Introduction

Let H be a real Hilbert space with inner product 〈·, ·〉 and norm ∥ · ∥, let C be a nonempty closed convex subset of H, and let R denote the set of all real numbers. For a given nonlinear operator A : H → H, the classical variational inequality problem is to find a point x* ∈ C such that

〈Ax*, x − x*〉 ≥ 0, ∀x ∈ C.   (1.1)

The set of solutions of problem (1.1) is denoted by VI(C, A). Variational inequalities were initially studied by Stampacchia [1] and have been widely studied ever since, as they arise in disciplines as diverse as partial differential equations, optimal control, optimization, mathematical programming, mechanics, and finance. On the other hand, a number of mathematical programs and iterative algorithms have been developed to resolve complex real-world problems. In particular, monotone variational inequalities with a fixed point constraint [2-4] include such practical problems as signal recovery [3], beamforming [5], and power control [6], and many iterative algorithms for solving them have been presented.

The constraint set has been defined in [3,5] as the intersection of finitely many closed convex subsets, C0 and Ci (i = 1, 2,..., m), of a real Hilbert space, and is represented as the fixed point set of the direct product mapping composed of the metric projections onto the Ci's. The case in which the intersection of the Ci's is empty has been considered in [2,6]. When C0 is the absolute set, for which the condition must be satisfied, the constraint set is defined as the subset of C0 whose elements are closest to the Ci's (i = 1, 2,..., m) in terms of the norm. This set is represented as the fixed point set of the mapping composed of the metric projections onto the Ci's [[2], Proposition 4.2]. Iterative algorithms have been presented in [2-4] for the convex optimization problem with a fixed point constraint, along with proofs that these algorithms converge strongly to the unique solution of problems with a strongly monotone operator; the strong monotonicity condition guarantees the uniqueness of the solution. A hierarchical fixed point problem, equivalent to the variational inequality for a monotone operator over the fixed point set, has been discussed in [7,8], along with iterative algorithms for solving it. The solution presented in [7,8] is not always unique, so there may be many solutions to the problem. In that case, a solution that makes practical systems and networks more stable and reliable must be found from among the candidate solutions. Hence, it is reasonable to identify the unique minimizer of an appropriate objective function over the hierarchical fixed point constraint. Very recently, related iterative methods and their convergence analyses for solving hierarchical fixed point problems, hierarchical optimization problems, and hierarchical variational inequality problems can be found in [9-16].

Let T : H → H be a self-mapping on H. We denote by Fix(T) the set of fixed points of T. A mapping T : H → H is called L-Lipschitz continuous if there exists a constant L ≥ 0 such that

∥Tx − Ty∥ ≤ L∥x − y∥, ∀x, y ∈ H.   (1.2)

In particular, if L ∈ [0, 1), T is called a contraction; if L = 1, T is called a nonexpansive mapping. A mapping A : H → H is called α-inverse strongly monotone if there exists α > 0 such that

〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥², ∀x, y ∈ H.   (1.3)

Obviously, every inverse strongly monotone mapping is a monotone and Lipschitz continuous mapping; see, e.g., [17].

In 2001, Yamada [2] introduced a hybrid steepest-descent method for finding an element of VI(C, F). His idea is as follows. Assume that C is the fixed point set of a nonexpansive mapping T : H → H; that is, C = Fix(T) := {x ∈ H : Tx = x}.

Suppose that F is η-strongly monotone and κ-Lipschitz continuous with constants η, κ > 0. Take a fixed number μ ∈ (0, 2η/κ²) and a sequence {λn} ⊂ (0, 1) satisfying the conditions below:

(L1) limn→∞ λn = 0;

(L2) Σ∞n=0 λn = ∞;

(L3) limn→∞ (λn − λn+1)/λ²n+1 = 0.

Starting with an arbitrary initial guess x0 ∈ H, one can generate a sequence {un} by the following algorithm:

un+1 := Tun − λn+1μF(Tun), n ≥ 0.   (1.4)

Then, Yamada [2] proved that {un} converges strongly to the unique element of VI(C, F). In the case where C is expressed as the intersection of the fixed-point sets of N nonexpansive mappings Ti : H → H with N ≥ 1 an integer, Yamada [2] proposed another algorithm,

un+1 := T[n+1]un − λn+1μF(T[n+1]un), n ≥ 0,   (1.5)

where T[k] := Tk mod N for integer k ≥ 1, with the mod function taking values in the set {1, 2,..., N} (i.e., if k = jN + q for some integers j ≥ 0 and 0 ≤ q < N, then T[k] = TN if q = 0 and T[k] = Tq if 1 ≤ q < N), where μ ∈ (0, 2η/κ²) and where the sequence {λn} of parameters satisfies conditions (L1), (L2), and (L4),

(L4) the series Σ∞n=1 |λn − λn+N| is convergent.

Under these conditions, Yamada [2] proved the strong convergence of {un} to the unique element of VI(C,F).
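Scheme (1.4) is easy to illustrate numerically. The following sketch is our own toy instance (not taken from [2]): H = R, T the projection onto C = [1, 3] (so Fix(T) = C), F the identity (η = κ = 1), μ = 1, and λn = 1/(n + 1); the unique element of VI(C, F) is then the minimum-norm point x* = 1.

```python
# Toy illustration of Yamada's hybrid steepest-descent scheme (1.4):
#   u_{n+1} = T(u_n) - lambda_{n+1} * mu * F(T(u_n)).
# Here H = R, T = projection onto C = [1, 3] (nonexpansive, Fix(T) = C),
# F = identity (eta = kappa = 1), mu in (0, 2*eta/kappa^2) = (0, 2).
# The unique solution of VI(C, F) is x* = 1.

def T(x):
    return min(max(x, 1.0), 3.0)  # metric projection onto [1, 3]

def F(x):
    return x  # identity operator

def yamada(u0, mu=1.0, iters=500):
    u = u0
    for n in range(iters):
        lam = 1.0 / (n + 2)        # diminishing step-size sequence
        t = T(u)
        u = t - lam * mu * F(t)    # scheme (1.4)
    return u

u = yamada(5.0)
print(u)  # approaches 1
```

The iterates drift just below C and are pulled back by T, approaching x* = 1 as the step sizes vanish.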

In 2003, Xu and Kim [18] continued the convergence study of the hybrid steepest-descent algorithms (1.4) and (1.5). The major contribution is that the strong convergence of the algorithms (1.4) and (1.5) holds with the condition (L3) replaced by the condition

(L3)' limn→∞ λn/λn+1 = 1, or equivalently, limn→∞(λn - λn+1)/λn+1 = 0, and with condition (L4) replaced by the condition

(L4)' limn→∞ λn/λn+N = 1, or equivalently, limn→∞(λn - λn+N)/λn+N = 0.

Theorem XK1 (see [[18], Theorem 3.1]). Assume that 0 < μ < 2η/κ². Assume also that the control conditions (L1), (L2), and (L3)' hold for {λn}. Then, the sequence {un} generated by algorithm (1.4) converges strongly to the unique element u* of VI(C, F).

Theorem XK2 (see [[18], Theorem 3.2]). Let μ ∈ (0, 2η/κ²) and let conditions (L1), (L2), and (L4)' be satisfied. Assume in addition that

∩Ni=1 Fix(Ti) = Fix(TNTN−1 ⋯ T1) = Fix(T1TN ⋯ T2) = ⋯ = Fix(TN−1TN−2 ⋯ T1TN).

Then, the sequence {un} generated by algorithm (1.5) converges in norm to the unique element u* of VI(C,F).

Recall the variational inequality for a monotone operator A1 : H → H over the fixed point set of a nonexpansive mapping T : H → H: find x* ∈ Fix(T) such that

〈A1x*, v − x*〉 ≥ 0, ∀v ∈ Fix(T),

where Fix(T) ≠ ∅. Very recently, Iiduka [19] introduced the following monotone variational inequality with the variational inequality constraint over the fixed point set of a nonexpansive mapping:

Problem I (see [[19], Problem 3.1]). Assume that

(i) T : H → H is a nonexpansive mapping with Fix(T) ≠ ∅;

(ii) A1 : H → H is α-inverse strongly monotone;

(iii) A2 : H → H is β-strongly monotone and L-Lipschitz continuous; that is, there are constants β, L > 0 such that

〈A2x − A2y, x − y〉 ≥ β∥x − y∥² and ∥A2x − A2y∥ ≤ L∥x − y∥

for all x, y ∈ H;

(iv) VI(Fix(T), A1) ≠ ∅.

Then the objective is to find x* ∈ VI(VI(Fix(T), A1), A2); that is, to find x* ∈ VI(Fix(T), A1) such that 〈A2x*, v − x*〉 ≥ 0 for all v ∈ VI(Fix(T), A1).

Since this problem has a triple structure, in contrast with bilevel programming problems, hierarchical constrained optimization problems, and hierarchical fixed point problems, it is referred to as a triple-hierarchical constrained optimization problem (THCOP). He presented some examples of the THCOP and proposed an iterative algorithm for finding solutions of such a problem.

Algorithm I (see [[19], Algorithm 4.1]). Let T : H → H and Ai : H → H (i = 1, 2) satisfy Assumptions (i)-(iv) in Problem I. The following steps are presented for solving Problem I.

Step 0. Take {αn}∞n=0 ⊂ (0, 1], {λn}∞n=0 ⊂ (0, 2α], and μ > 0, choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given xn ∈ H, compute xn+1 ∈ H as

yn := T(xn − λnA1xn), xn+1 := yn − μαnA2yn.

Update n := n + 1 and go to Step 1.
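The two-level update of Algorithm I can be made concrete with a small numerical sketch (our own toy instance, not from [19]): H = R, T the projection onto [0, 2], A1(x) = max(0, x − 1) (1-inverse strongly monotone, being the gradient of a convex function with 1-Lipschitz gradient), and A2(x) = x − 1/2 (1-strongly monotone, 1-Lipschitz). Then VI(Fix(T), A1) = [0, 1], and the unique solution of Problem I is x* = 1/2.

```python
# Toy instance of Algorithm I (triple-hierarchical constrained optimization):
#   y_n     = T(x_n - lambda_n * A1(x_n)),
#   x_{n+1} = y_n - mu * alpha_n * A2(y_n).
# Fix(T) = [0, 2], VI(Fix(T), A1) = [0, 1], and the unique solution is 0.5.

def T(x):               # projection onto [0, 2]
    return min(max(x, 0.0), 2.0)

def A1(x):              # 1-inverse strongly monotone
    return max(0.0, x - 1.0)

def A2(x):              # 1-strongly monotone and 1-Lipschitz
    return x - 0.5

def algorithm_I(x0, mu=1.0, iters=200):
    x = x0
    for n in range(iters):
        alpha = 1.0 / (n + 1) ** 0.5   # alpha_n -> 0, sum alpha_n = infinity
        lam = 1.0 / (n + 1)            # lambda_n <= alpha_n
        y = T(x - lam * A1(x))
        x = y - mu * alpha * A2(y)
    return x

x = algorithm_I(2.0)
print(x)  # approaches 0.5
```

The outer step with A2 selects the point 1/2 from the inner solution set [0, 1], which is what distinguishes the triple-hierarchical problem from an ordinary hierarchical one.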

The convergence analysis of the proposed algorithm was also studied in [19]. The following strong convergence theorem is established for Algorithm I.

Theorem I (see [[19], Theorem 4.1]). Assume that the sequence {xn}∞n=0 in Algorithm I is bounded. If μ ∈ (0, 2β/L²) is used and if {αn}∞n=0 ⊂ (0, 1] and {λn}∞n=0 ⊂ (0, 2α] satisfying (i) limn→∞ αn = 0, (ii) Σ∞n=0 αn = ∞, (iii) Σ∞n=0 |αn+1 − αn| < ∞, (iv) Σ∞n=0 |λn+1 − λn| < ∞, and (v) λn ≤ αn for all n ≥ 0 are used, then the sequence {xn}∞n=0 generated by Algorithm I satisfies the following properties.

(a) {xn}∞n=0 is bounded;

(b) limn→∞ ∥xn − yn∥ = 0 and limn→∞ ∥xn − Txn∥ = 0 hold;

(c) If ∥xn − yn∥ = o(λn), {xn}∞n=0 converges strongly to the unique solution of Problem I.

Motivated and inspired by the above research work, we continue the convergence study of Iiduka's relaxed hybrid steepest-descent Algorithm I. It is proven that, without the boundedness assumption on {xn}∞n=0, this sequence converges strongly to the unique solution of Problem I.

On the other hand, we introduce the following monotone variational inequality with the variational inequality constraint over the intersection of the fixed point sets of N nonexpansive mappings Ti : H → H, with N ≥ 1 an integer.

Problem II. Assume that

(i) each Ti : H → H is a nonexpansive mapping with ∩Ni=1 Fix(Ti) ≠ ∅;

(ii) A1 : H → H is α-inverse strongly monotone;

(iii) A2 : H → H is β-strongly monotone and L-Lipschitz continuous;

(iv) VI(∩Ni=1 Fix(Ti), A1) ≠ ∅.

Then the objective is to find x* ∈ VI(VI(∩Ni=1 Fix(Ti), A1), A2).

Another algorithm is proposed for Problem II.

Algorithm II. Let Ti : H → H (i = 1, 2,..., N) and Ai : H → H (i = 1, 2) satisfy Assumptions (i)-(iv) in Problem II. The following steps are presented for solving Problem II.

Step 0. Take {αn}∞n=0 ⊂ (0, 1], {λn}∞n=0 ⊂ (0, 2α], and μ ∈ (0, 2β/L²), choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given xn ∈ H, compute xn+1 ∈ H as

yn := T[n+1](xn − λnA1xn), xn+1 := yn − μαnA2yn.

Update n := n + 1 and go to Step 1.

First, assume that the following conditions hold:

(A1) limn→∞ αn = 0;

(A2) Σ∞n=0 αn = ∞;

(A3) limn→∞ (αn − αn+1)/αn+1 = 0 or Σ∞n=0 |αn+1 − αn| < ∞;

(A4) limn→∞ (λn − λn+1)/λn+1 = 0 or Σ∞n=0 |λn+1 − λn| < ∞;

(A5) λn ≤ αn for all n ≥ 0.

It is proven that under Conditions (A1)-(A5), the sequence {xn}∞n=0 generated by Algorithm I converges strongly to the unique solution of Problem I.

Second, assume that the following conditions hold:

(B1) limn→∞ αn = 0;

(B2) Σ∞n=0 αn = ∞;

(B3) limn→∞ (αn − αn+N)/αn+N = 0 or Σ∞n=0 |αn+N − αn| < ∞;

(B4) limn→∞ (λn − λn+N)/λn+N = 0 or Σ∞n=0 |λn+N − λn| < ∞;

(B5) λn ≤ αn for all n ≥ 0.

It is proven that under Conditions (B1)-(B5), the sequence {xn}∞n=0 generated by Algorithm II converges strongly to the unique solution of Problem II. It is worth pointing out that our results impose no boundedness assumption on the sequences {xn} and {yn} generated by Algorithm I or II.

In addition, if N = 1, then Algorithm II reduces to the above Algorithm I. Hence, Algorithm II is more general and more flexible than Algorithm I. Obviously, our problem of finding the unique element of VI(VI(∩Ni=1 Fix(Ti), A1), A2) is more general and more subtle than the problem of finding the unique element of VI(VI(Fix(T), A1), A2). Beyond question, our results represent a modification, supplement, extension, and development of Theorem I above.

The rest of the article is organized as follows. After some preliminaries in Section 2, we introduce two relaxed hybrid steepest-descent algorithms for solving Problems I and II in Section 3, respectively. Strong convergence for them is proven. Applications of these results to constrained generalized pseudoinverse are given in the last section, Section 4.

2 Preliminaries

Let H be a real Hilbert space with an inner product 〈·, ·〉 and its induced norm ∥ · ∥. Throughout this article, we write xn ⇀ x to indicate that the sequence {xn} converges weakly to x, and xn → x to indicate that {xn} converges strongly to x. A function f : H → R is said to be convex iff, for any x, y ∈ H and for any λ ∈ [0, 1], f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y). It is said to be strongly convex iff α > 0 exists such that, for all x, y ∈ H and for all λ ∈ [0, 1], f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) − (α/2)λ(1 − λ)∥x − y∥².

A : H → H is referred to as a strongly monotone operator with α > 0 [[20], Definition 25.2(iii)] iff 〈Ax − Ay, x − y〉 ≥ α∥x − y∥² for all x, y ∈ H. It is said to be inverse-strongly monotone with α > 0 (α-inverse-strongly monotone) [[17], Definition, p. 200] (see [[21], Definition 2.3.9(e)] for the definition of this operator, called a co-coercive operator, on finite dimensional spaces) iff 〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥² for all x, y ∈ H.

A : H → H is said to be hemicontinuous [[22], p. 204], [[20], Definition 27.14] iff, for any x, y ∈ H, the mapping g : [0, 1] → H, defined by g(t) := A(tx + (1 − t)y) (t ∈ [0, 1]), is continuous, where H is endowed with the weak topology. A : H → H is referred to as a Lipschitz continuous (L-Lipschitz continuous) operator [[23], Sect. 1.1], [[20], Definition 27.14] iff L > 0 exists such that ∥Ax − Ay∥ ≤ L∥x − y∥ for all x, y ∈ H. The fixed point set of a mapping A : H → H is denoted by Fix(A) := {x ∈ H : Ax = x}.

Let f : H → R be a Fréchet differentiable function. Then f is convex (resp. strongly convex) iff ∇f : H → H is monotone (resp. strongly monotone) [[20], Proposition 25.10], [[24], Sect. IV, Theorem 4.1.4]. If f : H → R is convex and ∇f : H → H is 1/L-Lipschitz continuous, then ∇f is L-inverse-strongly monotone [[25], Theorem 5].

The metric projection onto the nonempty, closed and convex set C (⊂ H), denoted by PC, is defined, for all x ∈ H, by PCx ∈ C and ∥x − PCx∥ = infy∈C ∥x − y∥.
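For simple sets the metric projection is explicit. A short sketch (our own examples) for a box and a Euclidean ball in R^n:

```python
# Metric projections P_C for two simple closed convex sets in R^n.

def project_box(x, lo, hi):
    """Projection onto the box [lo_1, hi_1] x ... x [lo_n, hi_n]:
    clamp each coordinate independently."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def project_ball(x, radius=1.0):
    """Projection onto the closed Euclidean ball of the given radius
    centered at the origin: rescale x if it lies outside."""
    norm = sum(xi * xi for xi in x) ** 0.5
    if norm <= radius:
        return list(x)
    return [radius * xi / norm for xi in x]

print(project_box([2.0, -0.5], [0.0, 0.0], [1.0, 1.0]))  # [1.0, 0.0]
print(project_ball([3.0, 4.0]))                          # approx [0.6, 0.8]
```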

The variational inequality [1,26] for a monotone operator A : H → H over a nonempty, closed, and convex set C (⊂ H) is to find a point in

VI(C, A) := {x* ∈ C : 〈Ax*, y − x*〉 ≥ 0, ∀y ∈ C}.

Some properties of the solution set of the monotone variational inequality are as follows:

Proposition 2.1. Let C (⊂ H) be nonempty, closed and convex, A : H → H be monotone and hemicontinuous, and f : H → R be convex and Fréchet differentiable. Then,

(i) [[22], Lemma 7.1.7] VI(C,A) = {x* ∈ C : 〈Ay, y - x*〉 ≥ 0, ∀y C}.

(ii) [[20], Theorem 25.C] VI(C, A) ≠ ∅ when C is bounded.

(iii) [[27], Lemma 2.24] VI(C, A) = Fix(PC(I - λA)) for all λ > 0, where I stands for the identity mapping on H.

(iv) [[27], Theorem 2.31] VI(C, A) consists of one point, if A is strongly monotone and Lipschitz continuous.

(v) [[26], Chap. II, Proposition 2.1 (2.1) and (2.2)] VI(C, ∇f) = ArgminxCf(x) := {x* ∈ C: f(x*) = minxC f(x)}.

On the other hand, the mapping T : H → H is referred to as a nonexpansive mapping [22,23,28-30] iff ∥Tx − Ty∥ ≤ ∥x − y∥ for all x, y ∈ H. The metric projection PC onto a given nonempty, closed, and convex set C (⊂ H) satisfies the nonexpansivity with Fix(PC) = C [[22], Theorem 3.1.4(i)], [[29], p. 371], [[30], Theorem 2.4-3]. The fixed point set of a nonexpansive mapping has the following properties:

Proposition 2.2. Let C (⊂ H) be nonempty, closed, and convex, and T : C → C be nonexpansive. Then,

(i) [[23], Proposition 5.3] Fix(T) is closed and convex;

(ii) [[23], Theorem 5.1] Fix(T) ≠ ∅ when C is bounded.

The following proposition provides an example of a nonexpansive mapping in which the fixed point set is equal to the solution set of the monotone variational inequality.

Proposition 2.3 (see [[19], Proposition 2.3]). Let C (⊂ H) be nonempty, closed, and convex, and A : H → H be α-inverse-strongly monotone. Then, for any given λ ∈ [0, 2α], the mapping Sλ : H → H defined by

Sλx := PC(x − λAx), x ∈ H,

satisfies the nonexpansivity and Fix(Sλ) = VI(C, A).
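The identity Fix(Sλ) = VI(C, A) can be checked numerically: iterating Sλ is a projected-gradient-type iteration whose fixed point solves the variational inequality. A one-dimensional sketch (our own example): C = [0, 2] and A(x) = x − 3, which is 1-inverse-strongly monotone; the solution of VI(C, A) is x* = 2.

```python
# Fixed-point iteration of S_lambda x = P_C(x - lambda * A(x)),
# whose fixed point set equals VI(C, A) (Proposition 2.3).
# C = [0, 2], A(x) = x - 3 (1-inverse-strongly monotone), lambda in [0, 2].

def P_C(x):
    return min(max(x, 0.0), 2.0)

def A(x):
    return x - 3.0

def S(x, lam=1.0):
    return P_C(x - lam * A(x))

x = 0.0
for _ in range(50):
    x = S(x)
print(x)  # the fixed point x* = 2 solves VI(C, A)
```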

The following proposition is needed to prove the main theorems in this article.

Proposition 2.4 (see [[2], Lemma 3.1]). Let A : H → H be β-strongly monotone and L-Lipschitz continuous, let T : H → H be a nonexpansive mapping and let μ ∈ (0, 2β/L²). For λ ∈ [0, 1], define Tλ : H → H by Tλx := Tx − λμA(Tx) for all x ∈ H. Then, for all x, y ∈ H,

∥Tλx − Tλy∥ ≤ (1 − λτ)∥x − y∥

holds, where τ := 1 − √(1 − μ(2β − μL²)) ∈ (0, 1].

The following lemmas will be used for the proof of our main results in this article.

Lemma 2.1 (see [31]). Let {an} be a sequence of nonnegative real numbers satisfying the property

an+1 ≤ (1 − sn)an + sntn + εn, n ≥ 0,

where {sn} ⊂ (0, 1], {tn} and {εn} are such that

(i) Σ∞n=0 sn = ∞;

(ii) either lim supn→∞ tn ≤ 0 or Σ∞n=0 sn|tn| < ∞;

(iii) εn ≥ 0 for all n ≥ 0 and Σ∞n=0 εn < ∞.

Then limn→∞ an = 0.

Lemma 2.2 (see [[23], Demiclosedness Principle]). Assume that T is a nonexpansive self-mapping of a closed convex subset C of a Hilbert space H. If T has a fixed point, then I - T is demiclosed. That is, whenever {xn} is a sequence in C weakly converging to some x C and the sequence {(I - T)xn} strongly converges to some y, it follows that (I - T)x = y. Here I is the identity operator of H.

The following lemma is an immediate consequence of an inner product.

Lemma 2.3. In a real Hilbert space H, there holds the inequality

∥x + y∥² ≤ ∥x∥² + 2〈y, x + y〉, ∀x, y ∈ H.

Lemma 2.4. Let {an}∞n=0 be a bounded sequence of nonnegative real numbers and {bn}∞n=0 be a sequence of real numbers such that lim supn→∞ bn ≤ 0. Then, lim supn→∞ anbn ≤ 0.

Proof. Since {an}∞n=0 is a bounded sequence of nonnegative real numbers, there is a constant a > 0 such that 0 ≤ an ≤ a for all n ≥ 0. Note that lim supn→∞ bn ≤ 0. Hence, given ε > 0 arbitrarily, there exists an integer n0 ≥ 1 such that bn < ε for all n ≥ n0. This implies that

anbn ≤ anε ≤ aε, ∀n ≥ n0.

Therefore, we have

lim supn→∞ anbn ≤ aε.

From the arbitrariness of ε > 0, it follows that lim supn→∞ anbn ≤ 0.

3 Relaxed hybrid steepest-descent algorithms

In this section, T : H → H and Ai : H → H (i = 1, 2) are assumed to satisfy Assumptions (i)-(iv) in Problem I. First the following algorithm is presented for Problem I.

Algorithm 3.1.

Step 0. Take {αn}∞n=0 ⊂ (0, 1], {λn}∞n=0 ⊂ (0, 2α], and μ ∈ (0, 2β/L²), choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given xn ∈ H, compute xn+1 ∈ H as

yn := T(xn − λnA1xn), xn+1 := yn − μαnA2yn.

Update n := n + 1 and go to Step 1.

The following convergence analysis is presented for Algorithm 3.1:

Theorem 3.1. Let μ ∈ (0, 2β/L²), {αn}∞n=0 ⊂ (0, 1], and {λn}∞n=0 ⊂ (0, 2α] be such that

(i) limn→ ∞ αn = 0;

(ii) Σ∞n=0 αn = ∞;

(iii) limn→∞ (αn − αn+1)/αn+1 = 0 or Σ∞n=0 |αn+1 − αn| < ∞;

(iv) limn→∞ (λn − λn+1)/λn+1 = 0 or Σ∞n=0 |λn+1 − λn| < ∞;

(v) λn ≤ αn for all n ≥ 0.

Then the sequence {xn}∞n=0 generated by Algorithm 3.1 satisfies the following properties:

(a) {xn}∞n=0 is bounded;

(b) limn→∞ ∥xn − yn∥ = 0 and limn→∞ ∥xn − Txn∥ = 0 hold;

(c) {xn}∞n=0 converges strongly to the unique solution of Problem I provided ∥xn − yn∥ = o(λn).

Proof. Let {x*} = VI(VI(Fix(T), A1), A2); by Assumption (iii) in Problem I and Proposition 2.1 (iv), this set indeed consists of exactly one point.

Putting zn := xn − λnA1xn for all n ≥ 0, we have yn = Tzn and xn+1 = yn − μαnA2yn.

We divide the rest of the proof into several steps.

Step 1. {xn} is bounded. Indeed, since A1 is α-inverse strongly monotone and 0 < λn ≤ 2α, the mapping I − λnA1 is nonexpansive, and we have

Utilizing Proposition 2.4 and Condition (v) we have (note that λn ≤ αn)

(3.1)

where τ := 1 − √(1 − μ(2β − μL²)) ∈ (0, 1]. By induction, it is easy to see that

This implies that {xn}∞n=0 is bounded. Assumption (ii) in Problem I guarantees that A1 is 1/α-Lipschitz continuous; that is,

∥A1x − A1y∥ ≤ (1/α)∥x − y∥, ∀x, y ∈ H.

Thus, the boundedness of {xn} ensures the boundedness of {A1xn}. From yn = T(xn − λnA1xn) and the nonexpansivity of T, it follows that {yn}∞n=0 is bounded. Since A2 is L-Lipschitz continuous, {A2yn} is also bounded.

Step 2. limn→∞ ∥xn − yn∥ = limn→∞ ∥xn − Txn∥ = 0. Indeed, utilizing Proposition 2.4, we obtain from the α-inverse strong monotonicity of A1 that

Since both {A1xn} and {A2yn} are bounded, from Lemma 2.1 and Conditions (iii), (iv) it follows that

limn→∞ ∥xn+1 − xn∥ = 0.   (3.2)

In the meantime, from ∥xn+1 − yn∥ = αnμ∥A2yn∥ and Condition (i), we get limn→∞ ∥xn+1 − yn∥ = 0. Since ∥xn − yn∥ ≤ ∥xn − xn+1∥ + ∥xn+1 − yn∥,

limn→∞ ∥xn − yn∥ = 0   (3.3)

is obtained from (3.2). Moreover, the nonexpansivity of T guarantees that

∥yn − Txn∥ = ∥T(xn − λnA1xn) − Txn∥ ≤ λn∥A1xn∥.

Hence, Conditions (i) and (v) lead to limn→∞ ∥yn − Txn∥ = 0. Therefore,

limn→∞ ∥xn − Txn∥ = 0   (3.4)

is obtained from (3.3).

Step 3. lim supn→∞ 〈A1x*, x* − xn〉 ≤ 0. Indeed, choose a subsequence {xnj} of {xn} such that

lim supn→∞ 〈A1x*, x* − xn〉 = limj→∞ 〈A1x*, x* − xnj〉.

The boundedness of {xnj} implies the existence of a weakly convergent subsequence; without loss of generality, we may assume that xnj ⇀ x̂ for some point x̂ ∈ H.

First, we can readily see that x̂ ∈ Fix(T). As a matter of fact, utilizing Lemma 2.2 we deduce this immediately from (3.4) and xnj ⇀ x̂. From x* ∈ VI(Fix(T), A1), we derive

lim supn→∞ 〈A1x*, x* − xn〉 = 〈A1x*, x* − x̂〉 ≤ 0.   (3.5)

Step 4. lim supn→∞ 〈A2x*, x* − xn〉 ≤ 0. Indeed, choose a subsequence {xnk} of {xn} such that

lim supn→∞ 〈A2x*, x* − xn〉 = limk→∞ 〈A2x*, x* − xnk〉.

The boundedness of {xnk} implies that there is a subsequence of {xnk} which converges weakly to a point x̄ ∈ H. Without loss of generality, we may assume that xnk ⇀ x̄. Utilizing Lemma 2.2, we conclude immediately from (3.4) and xnk ⇀ x̄ that x̄ ∈ Fix(T).

Let y ∈ Fix(T) be fixed arbitrarily. Then, in terms of Lemma 2.3, we conclude from the nonexpansivity of T and monotonicity of A1 that for all n ≥ 0,

(3.6)

which implies that for all n ≥ 0,

where M0 := sup{∥xn − y∥ + ∥yn − y∥ + ∥A1xn∥² : n ≥ 0} < ∞. From ∥xn − yn∥ = o(λn) and Conditions (i) and (v), for any ε > 0, there exists an integer m0 ≥ 0 such that M0(∥xn − yn∥/λn + λn) ≤ ε for all n ≥ m0. Hence, 0 ≤ ε + 2〈A1y, y − xn〉 for all n ≥ m0. Putting n := nk, we derive 0 ≤ ε + 2〈A1y, y − x̄〉 as k → ∞, from xnk ⇀ x̄. Since ε > 0 is arbitrary, it is clear that 〈A1y, y − x̄〉 ≥ 0 for all y ∈ Fix(T). Accordingly, utilizing Proposition 2.1 (i), we deduce from the α-inverse strong monotonicity of A1 that x̄ ∈ VI(Fix(T), A1). Therefore, from {x*} = VI(VI(Fix(T), A1), A2), we have

lim supn→∞ 〈A2x*, x* − xn〉 = 〈A2x*, x* − x̄〉 ≤ 0.   (3.7)

Step 5. limn→∞ ∥xn − x*∥ = 0. Indeed, observe first that for all n ≥ 0,

Utilizing Lemma 2.3 and Proposition 2.4, we deduce from Inequality (3.6) that for all n ≥ 0,

(3.8)

It is easy to see that the sequences appearing in (3.8) are bounded and nonnegative. Since Σ∞n=0 αn = ∞, λn ≤ αn → 0 (n → ∞), lim supn→∞ 〈A1x*, x* − xn〉 ≤ 0 and lim supn→∞ 〈A2x*, x* − xn+1〉 ≤ 0, we conclude, according to Lemma 2.4, that the corresponding lim sup terms in (3.8) are nonpositive. Therefore, utilizing Lemma 2.1, we have

limn→∞ ∥xn − x*∥ = 0.

This completes the proof.

On the other hand, Ti : H → H (i = 1, 2,..., N) and Ai : H → H (i = 1, 2) are assumed to satisfy Assumptions (i)-(iv) in Problem II. Then the following algorithm is presented for Problem II.

Algorithm 3.2.

Step 0. Take {αn}∞n=0 ⊂ (0, 1], {λn}∞n=0 ⊂ (0, 2α], and μ ∈ (0, 2β/L²), choose x0 ∈ H arbitrarily, and let n := 0.

Step 1. Given xn ∈ H, compute xn+1 ∈ H as

yn := T[n+1](xn − λnA1xn), xn+1 := yn − μαnA2yn.

Update n := n + 1 and go to Step 1.
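A toy run of the cyclic scheme (our own instance, with hypothetical parameter choices) in H = R: take N = 2 with T1, T2 the projections onto [0, 2] and [1, 3], so that ∩Fix(Ti) = [1, 2]; take A1 ≡ 0, which is α-inverse strongly monotone for every α > 0 (the same device used in the proof of Corollary 4.2 below), and A2 the identity, so the unique solution is the minimum-norm element x* = 1.

```python
# Toy run of Algorithm 3.2 with N = 2 cyclic nonexpansive mappings:
#   y_n     = T[n+1](x_n - lambda_n * A1(x_n)),
#   x_{n+1} = y_n - mu * alpha_n * A2(y_n),  with T[k] = T_{k mod N}.
# T1, T2 project onto [0, 2] and [1, 3] (common fixed points: [1, 2]);
# A1 = 0 and A2 = identity, so the unique solution is x* = 1.

def T1(x):
    return min(max(x, 0.0), 2.0)

def T2(x):
    return min(max(x, 1.0), 3.0)

def algorithm_32(x0, mu=1.0, iters=2000):
    Ts = [T1, T2]
    x = x0
    for n in range(iters):
        alpha = 1.0 / (n + 1)        # alpha_n -> 0, sum alpha_n = infinity
        y = Ts[n % 2](x)             # A1 = 0, so the inner step is just T[n+1]
        x = y - mu * alpha * y       # A2 = identity
    return x

x = algorithm_32(5.0)
print(x)  # approaches 1
```

The cyclic projections keep the iterates near the intersection [1, 2], while the vanishing A2-step steers them to its minimum-norm point.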

The following convergence analysis is presented for Algorithm 3.2:

Theorem 3.2. Let μ ∈ (0, 2β/L²), {αn}∞n=0 ⊂ (0, 1], and {λn}∞n=0 ⊂ (0, 2α] be such that

(i) limn→∞ αn = 0;

(ii) Σ∞n=0 αn = ∞;

(iii) limn→∞ (αn − αn+N)/αn+N = 0 or Σ∞n=0 |αn+N − αn| < ∞;

(iv) limn→∞ (λn − λn+N)/λn+N = 0 or Σ∞n=0 |λn+N − λn| < ∞;

(v) λn ≤ αn for all n ≥ 0.

Assume, in addition, that

Fix(TNTN−1 ⋯ T1) = Fix(T1TN ⋯ T2) = ⋯ = Fix(TN−1TN−2 ⋯ T1TN) = ∩Ni=1 Fix(Ti).   (3.9)

Then the sequence {xn}∞n=0 generated by Algorithm 3.2 satisfies the following properties:

(a) {xn}∞n=0 is bounded;

(b) limn→∞ ∥xn+N − xn∥ = 0 and limn→∞ ∥xn − T[n+N] ⋯ T[n+1]xn∥ = 0 hold;

(c) {xn}∞n=0 converges strongly to the unique solution of Problem II provided ∥xn − yn∥ = o(λn).

Proof. Let {x*} = VI(VI(∩Ni=1 Fix(Ti), A1), A2); by Assumption (iii) in Problem II and Proposition 2.1 (iv), this set indeed consists of exactly one point.

Putting zn := xn − λnA1xn for all n ≥ 0, we have yn = T[n+1]zn and xn+1 = yn − μαnA2yn.

We divide the rest of the proof into several steps.

Step 1. {xn} is bounded. Indeed, since A1 is α-inverse strongly monotone and 0 < λn ≤ 2α, the mapping I − λnA1 is nonexpansive, and we have

Utilizing Proposition 2.4 and Condition (v) we have (note that λn ≤ αn for all n ≥ 0)

where τ := 1 − √(1 − μ(2β − μL²)) ∈ (0, 1]. From this, we get by induction

Hence {xn}∞n=0 is bounded. Assumption (ii) in Problem II guarantees that A1 is 1/α-Lipschitz continuous; that is,

∥A1x − A1y∥ ≤ (1/α)∥x − y∥, ∀x, y ∈ H.

Thus, the boundedness of {xn} ensures the boundedness of {A1xn}. From yn = T[n+1](xn − λnA1xn) and the nonexpansivity of T[n+1], it follows that {yn}∞n=0 is bounded. Since A2 is L-Lipschitz continuous, {A2yn} is also bounded.

Step 2. limn→∞ ∥xn+N − xn∥ = limn→∞ ∥xn − T[n+N] ⋯ T[n+1]xn∥ = 0. Indeed, from the nonexpansivity of each Ti (i = 1, 2,..., N), Proposition 2.3, and the condition λn ≤ 2α (∀n ≥ 0), we conclude that for all n ≥ 0,

where M1 := sup{∥A1xn∥ : n ≥ 0} < ∞. From Proposition 2.4, it is found that

where M2 := sup{∥A2yn∥ : n ≥ 0} < ∞. Utilizing Lemma 2.1, we deduce from Conditions (iii), (iv) that

limn→∞ ∥xn+N − xn∥ = 0.   (3.10)

From ∥xn+1-yn∥ = μαnA2yn∥ ≤ μM2αn and Condition (i), we get limn→∞ xn+1-yn∥ = 0. Now we observe that the following relation holds:

(3.11)

Since ∥xn+1 - yn∥ → 0 and λn → 0 as n → ∞, from the nonexpansivity of each Ti (i = 1,2,..., N) and boundedness of {A1xn} it follows that as n → ∞ we have

Hence from (3.10) and (3.11) it follows that

Note that

That is,

limn→∞ ∥xn − T[n+N] ⋯ T[n+1]xn∥ = 0.   (3.12)

Step 3. lim supn→∞ 〈A1x*, x* − xn〉 ≤ 0. Indeed, choose a subsequence {xnj} of {xn} such that

lim supn→∞ 〈A1x*, x* − xn〉 = limj→∞ 〈A1x*, x* − xnj〉.

The boundedness of {xnj} implies the existence of a weakly convergent subsequence; without loss of generality, we may assume that xnj ⇀ x̂ for some point x̂ ∈ H.

First, we can readily see that x̂ ∈ ∩Ni=1 Fix(Ti). As a matter of fact, since the pool of mappings {Ti : 1 ≤ i ≤ N} is finite, we may further assume (passing to a further subsequence if necessary) that, for some integer k ∈ {1, 2,..., N},

Then, it follows from (3.12) that

Hence, by Lemma 2.2, we conclude that

Together with Assumption (3.9), this implies that x̂ ∈ ∩Ni=1 Fix(Ti). Now, since x* ∈ VI(∩Ni=1 Fix(Ti), A1), we obtain

lim supn→∞ 〈A1x*, x* − xn〉 = 〈A1x*, x* − x̂〉 ≤ 0.   (3.13)

Step 4. lim supn→∞ 〈A2x*, x* − xn〉 ≤ 0. Indeed, choose a subsequence {xnk} of {xn} such that

lim supn→∞ 〈A2x*, x* − xn〉 = limk→∞ 〈A2x*, x* − xnk〉.

The boundedness of {xnk} implies that there is a subsequence of {xnk} which converges weakly to a point x̄ ∈ H. Without loss of generality, we may assume that xnk ⇀ x̄. Repeating the same argument as in the proof of x̂ ∈ ∩Ni=1 Fix(Ti), we have x̄ ∈ ∩Ni=1 Fix(Ti).

Let y ∈ ∩Ni=1 Fix(Ti) be fixed arbitrarily. Then, it follows from the nonexpansivity of each Ti (i = 1, 2,..., N) and the monotonicity of A1 that for all n ≥ 0,

(3.14)

which implies that for all n ≥ 0,

where M3 := sup{∥xn − y∥ + ∥yn − y∥ : n ≥ 0} < ∞. From ∥xn − yn∥ = o(λn) and Conditions (i) and (v), for any ε > 0, there exists an integer m0 > 0 such that M3(∥xn − yn∥/λn + λn) ≤ ε for all n ≥ m0. Hence, 0 ≤ ε + 2〈A1y, y − xn〉 for all n ≥ m0. Putting n := nk, we derive 0 ≤ ε + 2〈A1y, y − x̄〉 as k → ∞, from xnk ⇀ x̄. Since ε > 0 is arbitrary, it is clear that 〈A1y, y − x̄〉 ≥ 0 for all y ∈ ∩Ni=1 Fix(Ti). Accordingly, utilizing Proposition 2.1 (i), we deduce from the α-inverse strong monotonicity of A1 that x̄ ∈ VI(∩Ni=1 Fix(Ti), A1). Therefore, from {x*} = VI(VI(∩Ni=1 Fix(Ti), A1), A2), we have

lim supn→∞ 〈A2x*, x* − xn〉 = 〈A2x*, x* − x̄〉 ≤ 0.   (3.15)

Step 5. limn→∞ ∥xn − x*∥ = 0. Indeed, repeating the same argument as in Step 5 of the proof of Theorem 3.1, from (3.14) we can derive

This completes the proof.

Remark 3.1. If we set N = 1 in Theorem 3.2, then the limit limn→∞ ∥xn+N − xn∥ = 0 reduces to limn→∞ ∥xn+1 − xn∥ = 0. In this case, we have

∥xn − yn∥ ≤ ∥xn − xn+1∥ + ∥xn+1 − yn∥ → 0 (n → ∞);

that is, limn→∞ ∥xn − yn∥ = 0.

Remark 3.2. Recall that a self-mapping T of a nonempty closed convex subset K of a real Hilbert space H is called attracting nonexpansive [32,33] if T is nonexpansive and if, for x, p ∈ K with x ∉ Fix(T) and p ∈ Fix(T),

∥Tx − p∥ < ∥x − p∥.

Recall also that T is firmly nonexpansive [32,33] if

∥Tx − Ty∥² ≤ 〈x − y, Tx − Ty〉, ∀x, y ∈ K.

It is known that Assumption (3.9) in Theorem 3.2 is automatically satisfied if each Ti is attracting nonexpansive. Since a projection is firmly nonexpansive, and hence attracting nonexpansive, we have the following consequence of Theorem 3.2.

Corollary 3.1. Let μ ∈ (0, 2β/L²), {αn}∞n=0 ⊂ (0, 1], and {λn}∞n=0 ⊂ (0, 2α] be such that

(i) limn→∞ αn = 0;

(ii) Σ∞n=0 αn = ∞;

(iii) limn→∞ (αn − αn+N)/αn+N = 0 or Σ∞n=0 |αn+N − αn| < ∞;

(iv) limn→∞ (λn − λn+N)/λn+N = 0 or Σ∞n=0 |λn+N − λn| < ∞;

(v) λn ≤ αn for all n ≥ 0.

Take x0 ∈ H arbitrarily and let the sequence {xn}∞n=0 be generated by the iterative algorithm

yn := P[n+1](xn − λnA1xn), xn+1 := yn − μαnA2yn, n ≥ 0,

where P[k] := Pk mod N with each Pi the metric projection onto a nonempty closed convex subset of H, and A1 is the same as in Problem I. Then the sequence {xn}∞n=0 satisfies the following properties:

(a) is bounded;

(b) limn→∞ ∥xn+N − xn∥ = 0 and limn→∞ ∥xn − P[n+N] ⋯ P[n+1]xn∥ = 0 hold;

(c) {xn}∞n=0 converges strongly to the unique element of VI(VI(∩Ni=1 Fix(Pi), A1), A2) provided ∥xn − yn∥ = o(λn).

Proof. In Theorem 3.2, putting Ti = Pi (i = 1, 2,..., N), we have Fix(Ti) = Fix(Pi) for each i.

It is easy to see that Assumption (3.9) is automatically satisfied and that

Therefore, in terms of Theorem 3.2 we obtain the desired result.

4 Applications to the constrained generalized pseudoinverse

Let K be a nonempty closed convex subset of a real Hilbert space H and let A be a bounded linear operator on H. Given an element b ∈ H, consider the minimization problem

minx∈K ∥Ax − b∥.   (3.16)

Let Sb denote the solution set of (3.16). Then Sb is closed and convex. It is known that Sb is nonempty if and only if Pcl(A(K))(b) ∈ A(K), where cl(A(K)) denotes the closure of A(K).

In this case, Sb has a unique element with minimum norm; that is, there exists a unique point x† ∈ Sb satisfying

∥x†∥ = min{∥x∥ : x ∈ Sb}.   (3.17)

(3.17)

Definition 4.1 (see [34]). The K-constrained pseudoinverse of A (symbol A†K) is defined as

A†K(b) := x†, b ∈ D(A†K) := {b ∈ H : Sb ≠ ∅},

where x† ∈ Sb is the unique solution to (3.17).

We introduce now the K-constrained generalized pseudoinverse of A (see [2]).

Let θ : H → R be a differentiable convex function such that θ' is an L-Lipschitz continuous and β-strongly monotone operator for some L > 0 and β > 0. Under these assumptions, for each b ∈ H with Sb ≠ ∅ there exists a unique point x̂ ∈ Sb such that

θ(x̂) = min{θ(x) : x ∈ Sb}.   (3.18)

Definition 4.2. The K-constrained generalized pseudoinverse of A associated with θ (symbol A†K,θ) is defined as

A†K,θ(b) := x̂, b ∈ D(A†K,θ) := {b ∈ H : Sb ≠ ∅},

where x̂ ∈ Sb is the unique solution to (3.18). Note that, if

θ(x) = (1/2)∥x∥², x ∈ H,

then the K-constrained generalized pseudoinverse of A associated with θ reduces to the K-constrained pseudoinverse of A in Definition 4.1.

Now we apply the results in Section 3 to construct the K-constrained generalized pseudoinverse of A. But first, observe that x ∈ K solves the minimization problem (3.16) if and only if there holds the following optimality condition:

〈A*(Ax − b), y − x〉 ≥ 0, ∀y ∈ K,

where A* is the adjoint of A. This is equivalent to, for each λ > 0,

〈(x − λA*(Ax − b)) − x, y − x〉 ≤ 0, ∀y ∈ K,

or

x = PK(x − λA*(Ax − b)).   (3.19)

Define a mapping T : H → H by

Tx := PK(x − λA*(Ax − b)), x ∈ H.   (3.20)

Lemma 4.1 (see [[18], Lemma 4.1]). If λ ∈ (0, 2∥A∥⁻²) and if Sb ≠ ∅, then T is attracting nonexpansive and Fix(T) = Sb.
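Combining the mapping T of (3.20) with the hybrid steepest-descent step gives a computable sketch. The instance below is our own toy example in R² (not from [18]): A = [1 1] as a map R² → R, b = 1, K = [0, 1]², so Sb = {x ∈ K : x1 + x2 = 1}; with θ(x) = ∥x∥²/2 (so θ' = I), the iteration un+1 = T(un) − λn μ θ'(T(un)) should recover the minimum-norm element (1/2, 1/2) of Sb.

```python
# Sketch: computing the minimum-norm constrained least-squares solution by
#   u_{n+1} = T(u_n) - lambda_n * mu * grad_theta(T(u_n)),
# where T x = P_K(x - lam * A^T (A x - b)) is the mapping (3.20).
# Toy data: A = [1 1] : R^2 -> R, b = 1, K = [0, 1]^2, theta(x) = ||x||^2 / 2.
# Then S_b = {x in K : x1 + x2 = 1}, with minimum-norm element (0.5, 0.5).

def proj_K(x):  # projection onto the box [0, 1]^2
    return [min(max(xi, 0.0), 1.0) for xi in x]

def T(x, lam=0.5):  # lam in (0, 2/||A||^2) = (0, 1) since ||A||^2 = 2
    r = x[0] + x[1] - 1.0                            # residual A x - b
    return proj_K([x[0] - lam * r, x[1] - lam * r])  # gradient step, projected

def min_norm_solution(u0, mu=1.0, iters=300):
    u = u0
    for n in range(iters):
        lam_n = 1.0 / (n + 1)                   # diminishing step sizes
        t = T(u)
        u = [ti - lam_n * mu * ti for ti in t]  # grad_theta = identity
    return u

u = min_norm_solution([1.0, 0.0])
print(u)  # approaches (0.5, 0.5)
```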

Theorem 4.1. Let μ ∈ (0, 2β/L²), {αn}∞n=0 ⊂ (0, 1], and {λn}∞n=0 ⊂ (0, 2α] be such that

(i) limn→∞ αn = 0;

(ii) Σ∞n=0 αn = ∞;

(iii) limn→∞ (αn − αn+1)/αn+1 = 0 or Σ∞n=0 |αn+1 − αn| < ∞;

(iv) limn→∞ (λn − λn+1)/λn+1 = 0 or Σ∞n=0 |λn+1 − λn| < ∞;

(v) λn ≤ αn for all n ≥ 0.

Take x0 ∈ H arbitrarily and let {xn}∞n=0 be the sequence generated by the algorithm

yn := T(xn − λnA1xn), xn+1 := yn − μαnθ'(yn), n ≥ 0,   (3.21)

where T is given in (3.20) and A1 is the same as in Problem I. Then the sequence {xn}∞n=0 satisfies the following properties:

(a) {xn}∞n=0 is bounded;

(b) limn→∞ ∥xn − yn∥ = 0 and limn→∞ ∥xn − Txn∥ = 0 hold;

(c) {xn}∞n=0 converges strongly to the unique element of VI(VI(Sb, A1), θ') provided ∥xn − yn∥ = o(λn).

Proof. In Theorem 3.1, put A2 := θ'. Since Fix(T) = Sb and θ' is L-Lipschitz continuous and β-strongly monotone, utilizing Theorem 3.1 we obtain the desired result.

Corollary 4.1 (see [[18], Theorem 4.1]). Let μ ∈ (0, 2β/L²) and {αn}∞n=0 ⊂ (0, 1] be such that

(i) limn→∞ αn = 0;

(ii) Σ∞n=0 αn = ∞;

(iii) limn→∞ (αn − αn+1)/αn+1 = 0 or Σ∞n=0 |αn+1 − αn| < ∞.

Take x0 ∈ H arbitrarily and let {xn}∞n=0 be the sequence generated by the algorithm

xn+1 := Txn − μαnθ'(Txn), n ≥ 0,

where T is given in (3.20). Then the sequence {xn}∞n=0 satisfies the following properties:

(a) {xn}∞n=0 is bounded;

(b) limn→∞ ∥xn − Txn∥ = 0 holds;

(c) {xn}∞n=0 converges strongly to the unique solution of (3.18).

Proof. Note that the minimization problem (3.18) is equivalent to the following variational inequality problem: find x̂ ∈ Sb such that

〈θ'(x̂), x − x̂〉 ≥ 0, ∀x ∈ Sb,   (3.22)

where Sb = Fix(T) and θ' is L-Lipschitz continuous and β-strongly monotone. In Theorem 4.1, put A1 = 0. Then we have

VI(Fix(T), A1) = VI(Sb, 0) = Sb.

Take a number α ∈ (0, ∞) arbitrarily. Then A1 = 0 is α-inverse strongly monotone. Now, choose a sequence {λn}∞n=0 ⊂ (0, 2α] such that Conditions (iv), (v) in Theorem 4.1 hold, that is,

(iv) limn→∞ (λn − λn+1)/λn+1 = 0 or Σ∞n=0 |λn+1 − λn| < ∞;

(v) λn αn for all n ≥ 0.

In this case, Algorithm (3.21) reduces to

yn = Txn, xn+1 = yn − μαnθ'(yn), n ≥ 0,

which is equivalent to

xn+1 = Txn − μαnθ'(Txn), n ≥ 0.

Therefore, all conditions in Theorem 4.1 are satisfied. Consequently, utilizing Theorem 4.1 we derive the desired result.

Lemma 4.2 (see [32,33]). Assume that N is a positive integer and that T1, T2,..., TN are N attracting nonexpansive mappings on H having a common fixed point. Then,

Fix(T1T2 ⋯ TN) = ∩Ni=1 Fix(Ti).

Now, assume that {Ki}Ni=1 is a family of N closed convex subsets of K such that

(3.23)

For each 1 ≤ i ≤ N, we define Ti : H → H by

Tix := PKi(x − λA*(Ax − b)), x ∈ H,

where PKi is the metric projection from H onto Ki.

Theorem 4.2. Let μ ∈ (0, 2β/L²), {αn}∞n=0 ⊂ (0, 1], and {λn}∞n=0 ⊂ (0, 2α] be such that

(i) limn→∞ αn = 0;

(ii) Σ∞n=0 αn = ∞;

(iii) limn→∞ (αn − αn+N)/αn+N = 0 or Σ∞n=0 |αn+N − αn| < ∞;

(iv) limn→∞ (λn − λn+N)/λn+N = 0 or Σ∞n=0 |λn+N − λn| < ∞;

(v) λn ≤ αn for all n ≥ 0.

Take x0 ∈ H arbitrarily and let {xn}∞n=0 be the sequence generated by the algorithm

yn := T[n+1](xn − λnA1xn), xn+1 := yn − μαnθ'(yn), n ≥ 0,   (3.24)

where each Ti (1 ≤ i ≤ N) is given as above and A1 is the same as in Problem II. Then the sequence {xn}∞n=0 satisfies the following properties:

(a) {xn}∞n=0 is bounded;

(b) limn→∞ ∥xn+N − xn∥ = 0 and limn→∞ ∥xn − T[n+N] ⋯ T[n+1]xn∥ = 0 hold;

(c) {xn}∞n=0 converges strongly to the unique element of VI(VI(Sb, A1), θ') provided ∥xn − yn∥ = o(λn).

Proof. We observe first that

∩Ni=1 Fix(Ti) = Sb.   (3.25)

Indeed,

Conversely, if x ∈ ∩Ni=1 Fix(Ti), then for all y ∈ K, we have

(3.26)

Since each Ki is a subset of K, (3.26) holds over each Ki. This implies that

and hence

By Lemmas 4.1 and 4.2, we see that Assumption (3.9) in Theorem 3.2 holds. In Theorem 3.2, put A2 := θ'. Since θ' is L-Lipschitz continuous and β-strongly monotone, utilizing Theorem 3.2 we obtain the desired result.

Corollary 4.2 (see [[18], Theorem 4.2]). Let μ ∈ (0, 2β/L²) and {αn}∞n=0 ⊂ (0, 1] be such that

(i) limn →∞ αn = 0;

(ii) Σ∞n=0 αn = ∞;

(iii) limn→∞ (αn − αn+N)/αn+N = 0 or Σ∞n=0 |αn+N − αn| < ∞.

Take x0 ∈ H arbitrarily and let {xn}∞n=0 be the sequence generated by the algorithm

xn+1 := T[n+1]xn − μαnθ'(T[n+1]xn), n ≥ 0,

where each Ti (1 ≤ i ≤ N) is given as above. Then the sequence {xn}∞n=0 satisfies the following properties:

(a) {xn}∞n=0 is bounded;

(b) limn→∞ ∥xn+N − xn∥ = 0 and limn→∞ ∥xn − T[n+N] ⋯ T[n+1]xn∥ = 0 hold;

(c) {xn}∞n=0 converges strongly to the unique solution of (3.18).

Proof. Note that the minimization problem (3.18) is equivalent to the following variational inequality problem

where Sb = Fix(T) and θ' is L-Lipschitz continuous and β-strongly monotone. In Theorem 4.2, put A1 = 0. Then we have

Take a number α ∈ (0, ∞) arbitrarily. Then A1 is α-inverse strongly monotone. Now, choose a sequence {λn} such that Conditions (iv), (v) in Theorem 4.2 hold, that is,

(iv) limn→∞(λn − λn+N)/λn+N = 0 or ∑∞n=0 |λn+N − λn| < ∞;

(v) λn ≤ αn for all n ≥ 0.

In this case, Algorithm (3.24) reduces to the following

which is equivalent to

Therefore, all conditions in Theorem 4.2 are satisfied. Consequently, utilizing Theorem 4.2 we derive the desired result.
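The equivalence invoked at the start of the proof is the standard first-order characterization of constrained minima of a convex, differentiable θ over the closed convex set Sb:

```latex
x^* \in \operatorname*{arg\,min}_{x \in S_b} \theta(x)
\;\Longleftrightarrow\;
\langle \theta'(x^*),\, x - x^* \rangle \ge 0 \quad \forall x \in S_b
\;\Longleftrightarrow\;
x^* \in \mathrm{VI}(S_b, \theta').
```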

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed equally to this work.

Acknowledgements

The authors are grateful to the referees for their careful reading, for noting several misprints, and for their helpful comments.

The research of the first author was partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and Leading Academic Discipline Project of Shanghai Normal University (DZL707), the research of the second author was partially supported by a grant from NSC 100-2115-M-033-001-, and the research of the third author was partially supported by the grant NSC 99-2221-E-037-007-MY3.

References

1. Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Classics Appl Math, SIAM, Philadelphia (2000)

2. Yamada, I: The hybrid steepest-descent method for the variational inequality problem over the intersection of fixed-point sets of nonexpansive mappings. In: Butnariu, D, Censor, Y, Reich, S (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, pp. 473–504. Kluwer Academic Publishers, Dordrecht (2001)

3. Combettes, PL: A block-iterative surrogate constraint splitting method for quadratic signal recovery. IEEE Trans Signal Process. 51(7), 1771–1782 (2003)

4. Iiduka, H, Yamada, I: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J Optim. 19, 1881–1893 (2009)

5. Slavakis, K, Yamada, I: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans Signal Process. 55, 4511–4522 (2007)

6. Iiduka, H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math Program (2010)

7. Mainge, PE, Moudafi, A: Strong convergence of an iterative method for hierarchical fixed-point problems. Pac J Optim. 3, 529–538 (2007)

8. Moudafi, A: Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Probl. 23, 1635–1640 (2007)

9. Cabot, A: Proximal point algorithm controlled by a slowly vanishing term: applications to hierarchical minimization. SIAM J Optim. 15, 555–572 (2005)

10. Luo, ZQ, Pang, JS, Ralph, D: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, New York (1996)

11. Izmailov, AF, Solodov, MV: An active set Newton method for mathematical programs with complementarity constraints. SIAM J Optim. 19, 1003–1027 (2008)

12. Hirstoaga, SA: Iterative selection methods for common fixed point problems. J Math Anal Appl. 324, 1020–1035 (2006)

13. Iiduka, H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 71, 1292–1297 (2009)

14. Iiduka, H: A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization. 59, 873–885 (2010)

15. Zeng, LC, Wong, NC, Yao, JC: Convergence analysis of modified hybrid steepest-descent methods with variable parameters for variational inequalities. J Optim Theory Appl. 132, 51–69 (2007)

16. Ceng, LC, Ansari, QH, Yao, JC: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J Optim Theory Appl. 151, 489–512 (2011)

17. Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J Math Anal Appl. 20, 197–228 (1967)

18. Xu, HK, Kim, TH: Convergence of hybrid steepest-descent methods for variational inequalities. J Optim Theory Appl. 119, 185–201 (2003)

19. Iiduka, H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J Optim Theory Appl. 148, 580–592 (2011)

20. Zeidler, E: Nonlinear Functional Analysis and Its Applications II/B: Nonlinear Monotone Operators. Springer, New York (1985)

21. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems I. Springer, New York (2003)

22. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)

23. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)

24. Hiriart-Urruty, JB, Lemarechal, C: Convex Analysis and Minimization Algorithms I. Springer, New York (1993)

25. Baillon, JB, Haddad, G: Quelques proprietes des operateurs angle-bornes et n-cycliquement monotones. Isr J Math. 26, 137–150 (1977)

26. Ekeland, I, Temam, R: Convex Analysis and Variational Problems. Classics Appl Math, SIAM, Philadelphia (1999)

27. Vasin, VV, Ageev, AL: Ill-Posed Problems with A Priori Information. VSP, Utrecht (1995)

28. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)

29. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)

30. Stark, H, Yang, Y: Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets, and Optics. Wiley, New York (1998)

31. Xu, HK: Iterative algorithms for nonlinear operators. J Lond Math Soc. 66, 240–256 (2002)

32. Bauschke, HH: The approximation of fixed points of compositions of nonexpansive mappings in Hilbert spaces. J Math Anal Appl. 202, 150–159 (1996)

33. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)

34. Engl, HW, Hanke, M, Neubauer, A: Regularization of Inverse Problems. Kluwer, Dordrecht (2000)