On the optimization problems for the proper generalized decomposition and the n best term approximation

A. Falcó
Abstract

The Proper Generalized Decomposition, or PGD for short, is a technique that drastically reduces calculation and storage costs and presents some similarities with the Proper Orthogonal Decomposition (POD). Initially introduced for the analysis and reduction of statistical and experimental data, the a posteriori decomposition techniques, also known as the Karhunen-Loève expansion, the Singular Value Decomposition, or Principal Component Analysis, are now used in the context of model reduction. They are also related to the so-called n-best term approximation problem. In this paper we study and analyze the different mathematical and computational problems appearing in the optimization procedures related to the Proper Generalized Decomposition and its relative n-best term approximation problem.

Keywords: Proper Generalized Decomposition, Tensor Product Hilbert Space, Best Approximation.

1 Introduction

The main goal of this paper is to use a separated representation of the solution of a class of elliptic problems, which allows one to define a tensor product approximation basis as well as to decouple the numerical integration of a high-dimensional model in each dimension. The milestone of this methodology is the use of shape functions given by a tensorial construction. This has several advantages: only one-dimensional polynomials and their derivatives need to be manipulated, which provides better computational performance, a simplified implementation, and the use of one-dimensional integration rules. Moreover, it makes possible the solution of models defined in spaces of more than a hundred dimensions in some specific applications. This problem is closely related to the decomposition of a tensor as a sum of rank-one tensors, which can be considered as a higher-order extension of the matrix Singular Value Decomposition.

It is well known, from the Lax-Milgram Lemma, that if $V$ is a Hilbert space, $A(\cdot,\cdot)$ is a bounded, $V$-elliptic bilinear form on $V$, and $\ell \in V'$, then there is a unique solution of the problem: find $u \in V$ such that
$$A(u,v) = \ell(v) \quad \text{for all } v \in V. \qquad (1)$$
A generalized paradigm is that if $V = V_1 \otimes \cdots \otimes V_d$, then the intensive use of tensor techniques can help the computer scientist to "avoid the curse of dimensionality". The Proper Generalized Decomposition (PGD) method has been recently proposed [1, 17, 21] for the a priori construction of separated representations of an element $u$ in a tensor product space $V = V_1 \otimes \cdots \otimes V_d$ which is the solution of a problem of type (1) with a symmetric bilinear form. A rank-$n$ approximated separated representation $u_n$ of $u$ is defined by
$$u_n = \sum_{i=1}^{n} v_i^1 \otimes \cdots \otimes v_i^d. \qquad (2)$$

The concept of separated representation was introduced by Beylkin and Mohlenkamp in [4] and is related to the problem of constructing approximate solutions of some classes of problems in high-dimensional spaces by means of a separable function. In particular, for a given map $u : [0,1]^d \subset \mathbb{R}^d \to \mathbb{R}$, we say that it has a separable representation if
$$u(x_1, \ldots, x_d) = \sum_{j=1}^{\infty} u_1^{(j)}(x_1) \cdots u_d^{(j)}(x_d). \qquad (3)$$
Now, consider a mesh of $[0,1]$ in the $x_k$-variable given by $N_k$ mesh points, $1 \le k \le d$; then we can write a discrete version of (3) as
$$u(x_{i_1}, \ldots, x_{i_d}) = \sum_{j=1}^{\infty} u_1^{(j)}(x_{i_1}) \cdots u_d^{(j)}(x_{i_d}), \qquad (4)$$
where $1 \le i_k \le N_k$ for $1 \le k \le d$. Observe that, for each $1 \le k \le d$, if $\mathbf{x}_k^j \in \mathbb{R}^{N_k}$ denotes the vector with components $u_k^{(j)}(x_{i_k})$ for $1 \le i_k \le N_k$, then (4) is equivalent to
$$u = \sum_{j=1}^{\infty} \mathbf{x}_1^j \otimes \cdots \otimes \mathbf{x}_d^j. \qquad (5)$$
We point out that (5) is a useful expression to implement numerical algorithms using the MATLAB and OCTAVE function kron.
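As a small illustration of (5), the sketch below (not code from the paper; the sample function and mesh sizes are made up for the example) uses NumPy's `kron`, the analogue of the MATLAB/OCTAVE function mentioned above, to show that the grid values of a separable function are a Kronecker product of its one-dimensional factor vectors.

```python
import numpy as np

# Discretize u(x1, x2) = sin(x1) * cos(x2) on a tensor grid. Since u is
# separable with a single term, its grid values equal one Kronecker
# product of the two one-dimensional factor vectors, as in (5).
x1 = np.linspace(0.0, 1.0, 4)
x2 = np.linspace(0.0, 1.0, 5)
f1 = np.sin(x1)          # u_1^{(1)} sampled at the x1 mesh points
f2 = np.cos(x2)          # u_2^{(1)} sampled at the x2 mesh points

u_grid = np.array([[np.sin(a) * np.cos(b) for b in x2] for a in x1])
u_kron = np.kron(f1, f2).reshape(4, 5)   # rank-one separated representation

assert np.allclose(u_grid, u_kron)
```

Summing several such Kronecker products reproduces the truncated series (5) term by term.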

This paper is organized as follows. In the next section we introduce tensor product Hilbert spaces. In Section 3 we give the definition of the progressive representation in tensor product Hilbert spaces and state an existence theorem for the progressive separated representation using a class of energy functionals. Next, in Section 4 we propose two algorithms in order to construct the PGD approach for general elliptic problems. Finally, in Section 5 some numerical examples are given.

2 Tensor product sums on tensor product Hilbert spaces

Let $V = \bigotimes_{i=1}^{d} V_i$ be a tensor product Hilbert space, where the $V_i$, for $i = 1, 2, \ldots, d$, are separable Hilbert spaces. We denote by $(\cdot,\cdot)$ and $\|\cdot\|$ a general inner product on $V$ and its associated norm. We introduce norms $\|\cdot\|_i$ and associated inner products $(\cdot,\cdot)_i$ on $V_i$, for $i = 1, 2, \ldots, d$. These norms and inner products define a particular norm on $V$, denoted $\|\cdot\|_V$, by
$$\big\| \otimes_{i=1}^{d} v_i \big\|_V = \prod_{i=1}^{d} \|v_i\|_i$$
for all $(v_1, v_2, \ldots, v_d) \in \mathbf{V}$, where $\mathbf{V}$ is the product space $V_1 \times \cdots \times V_d$. The associated inner product $(\cdot,\cdot)_V$ is defined by
$$\big( \otimes_{i=1}^{d} u_i, \otimes_{i=1}^{d} v_i \big)_V = \prod_{i=1}^{d} (u_i, v_i)_i.$$
Recall that $V$, endowed with the inner product $(\cdot,\cdot)_V$, is in fact constructed by taking the completion under this inner product.

Now we introduce the subset of $V$ of vectors that can be written as a sum of tensor rank-one elements. For each $n \in \mathbb{N}$ we define the set of rank-$n$ tensors
$$S_n = \{ u \in V : \operatorname{rank}_\otimes u \le n \},$$
introduced in [11] in the following way. Given $u \in V$, we say that $u \in S_1$ if $u = u_1 \otimes u_2 \otimes \cdots \otimes u_d$, where $u_i \in V_i$ for $i = 1, \ldots, d$. For $n \ge 2$ we define inductively $S_n = S_{n-1} + S_1$, that is,
$$S_n = \Big\{ u \in V : u = \sum_{i=1}^{k} u^{(i)}, \ u^{(i)} \in S_1 \text{ for } 1 \le i \le k \le n \Big\}.$$
Note that $S_n \subset S_{n+1}$ for all $n \ge 1$. We will say that $u \in V$ has $\operatorname{rank}_\otimes u = n$ if and only if $u \in S_n \setminus S_{n-1}$. We first consider the following important property of the set $S_1$ and the norm $\|\cdot\|_V$.

Lemma 1 $S_1$ is weakly closed in $(V, \|\cdot\|_V)$.

Since equivalent norms induce the same weak topology on $V$, we have the following corollary.

Corollary 2 If the norm $\|\cdot\|$ on $V$ is equivalent to the norm $\|\cdot\|_V$, then $S_1$ is weakly closed in $(V, \|\cdot\|)$.

Corollary 3 If the $V_i$ are finite-dimensional vector spaces, then $S_1$ is weakly closed in $(V, \|\cdot\|)$ for any norm $\|\cdot\|$.

3 An existence theorem for the progressive separated representation in tensor product Hilbert spaces using a class of energy functionals

We now want to construct a class of energy functionals on $S_1$ with respect to a given inner product $(\cdot,\cdot)$ on $V$, with associated norm $\|\cdot\|$. The results of this section are due to Falcó and Nouy [10]. We make the following assumption on the inner product.

Assumption 4 We assume that the inner product $(\cdot,\cdot)$, with associated norm $\|\cdot\|$, is such that $S_1$ is weakly closed in $(V, \|\cdot\|)$.

Let us recall that, by Corollary 2, the particular norm $\|\cdot\|_V$ satisfies Assumption 4. Now, for the norm $\|\cdot\|$ and for each $r \in V$, we introduce the functional $E_r : V \to \mathbb{R}$ given by
$$E_r(v) = \frac{1}{2}\|v\|^2 - (r, v) = \frac{1}{2}\|r - v\|^2 - \frac{1}{2}\|r\|^2.$$
The following result gives the main properties of the energy functional $E_r$.

Theorem 5 For each $r \in V$ there exists $v^* \in S_1$ such that
$$E_r(v^*) = \min_{v \in S_1} E_r(v). \qquad (6)$$
Moreover,
$$E_r(v^*) = -\frac{1}{2}\|v^*\|^2, \qquad (7)$$
$$\|r - v^*\|^2 = \|r\|^2 - \|v^*\|^2, \qquad (8)$$
and
$$(r - v^*, v^*) = 0. \qquad (9)$$
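In the matrix case ($d = 2$ with the Frobenius inner product) the minimizer in (6) is the dominant singular triple of $r$; the following NumPy sketch (an illustration added here, not the paper's code, with a made-up random matrix) checks properties (7)-(9) numerically.

```python
import numpy as np

# d = 2 with the Frobenius inner product: the minimizer of E_r over S_1
# is the dominant singular triple sigma_1 * u_1 v_1^T of r.
rng = np.random.default_rng(42)
r = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(r)
v_star = s[0] * np.outer(U[:, 0], Vt[0, :])   # argmin of E_r over S_1

def E(r, v):
    # E_r(v) = 1/2 ||v||^2 - (r, v) with the Frobenius inner product
    return 0.5 * np.sum(v * v) - np.sum(r * v)

# (7): E_r(v*) = -1/2 ||v*||^2  (here, -sigma_1^2 / 2)
assert np.isclose(E(r, v_star), -0.5 * s[0] ** 2)
# (8): ||r - v*||^2 = ||r||^2 - ||v*||^2
assert np.isclose(np.linalg.norm(r - v_star) ** 2,
                  np.linalg.norm(r) ** 2 - s[0] ** 2)
# (9): the remainder r - v* is orthogonal to v*
assert np.isclose(np.sum((r - v_star) * v_star), 0.0)
```

For $d \ge 3$ no SVD shortcut is available, which is what motivates the fixed-point algorithms of Section 4.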

Definition 6 (Progressive separated representation of $z \in V$) For a given $z \in V$, take $z_0 = 0$ and for $n \ge 1$ proceed as follows:
$$r_n = z - z_{n-1}, \qquad (10)$$
$$z_n = z_{n-1} + v_n, \quad \text{where } v_n \in \operatorname*{argmin}_{v \in S_1} E_{r_n}(v). \qquad (11)$$
Then $z_n$ is called an optimal rank-$n$ progressive separated representation of $z$ with respect to the norm $\|\cdot\|$.

We introduce the following definition of the progressive rank. Note that, in general, the progressive rank of an element $z \in V$ is different from the optimal rank $\operatorname{rank}_\otimes(z)$.

Definition 7 (Progressive rank) We define the progressive rank of an element $z \in V$, denoted by $\operatorname{rank}_\sigma(z)$, as
$$\operatorname{rank}_\sigma(z) = \min\{ n : z_n = z \}, \qquad (12)$$
where $z_n$ is the progressive separated representation of $z$ given in Definition 6 and, by convention, $\min \emptyset = \infty$.

Theorem 8 (Existence of the progressive separated representation) For $z \in V$, the sequence $\{ z_n = \sum_{i=1}^{n} v_i \}_{n > 0}$ defined in Definition 6 satisfies
$$z = \lim_{n \to \infty} z_n = z_{\operatorname{rank}_\sigma(z)} = \sum_{i=1}^{\operatorname{rank}_\sigma(z)} v_i.$$
Moreover, for each $0 \le n \le \operatorname{rank}_\sigma(z) - 1$,
$$\|z - z_n\|^2 = \|z\|^2 - \sum_{i=1}^{n} \|v_i\|^2 = \sum_{i=n+1}^{\operatorname{rank}_\sigma(z)} \|v_i\|^2 \qquad (13)$$
and
$$\frac{\|r_n\|}{\|z\|} = \prod_{i=1}^{n-1} \sin \theta_i, \qquad (14)$$
where $\theta_i$ is the angle between $r_i$ and $v_i$, that is,
$$\theta_i = \arccos \frac{(r_i, v_i)}{\|r_i\| \, \|v_i\|}.$$
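For $d = 2$ with the Frobenius inner product, each greedy correction in (11) is the dominant singular triple of the residual, so $z_n$ coincides with the rank-$n$ truncated SVD. The sketch below (illustrative only; the matrix and number of steps are made up) verifies identities (13) and (14) numerically under that assumption.

```python
import numpy as np

# Greedy progressive representation of a matrix z (d = 2, Frobenius
# inner product): each step subtracts the dominant singular triple of
# the residual, so z_n equals the rank-n truncated SVD of z.
rng = np.random.default_rng(0)
z = rng.standard_normal((6, 4))

r, zn, vs, sines = z.copy(), np.zeros_like(z), [], []
for _ in range(3):
    U, s, Vt = np.linalg.svd(r)
    v = s[0] * np.outer(U[:, 0], Vt[0, :])   # argmin of E_{r_n} over S_1
    cos = np.sum(r * v) / (np.linalg.norm(r) * np.linalg.norm(v))
    sines.append(np.sqrt(1.0 - cos ** 2))    # sin(theta_i) from (14)
    vs.append(v)
    zn, r = zn + v, r - v

# Identity (13): ||z - z_n||^2 = ||z||^2 - sum_i ||v_i||^2
assert np.isclose(np.linalg.norm(z - zn) ** 2,
                  np.linalg.norm(z) ** 2
                  - sum(np.linalg.norm(v) ** 2 for v in vs))

# Rate (14): the residual decays as the product of the sines
assert np.isclose(np.linalg.norm(r) / np.linalg.norm(z), np.prod(sines))
```

The telescoping of the sines in (14) is exactly what makes the greedy error monotone in the chosen norm.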

4 A variational formulation of the Proper Generalized Decomposition

4.1 Formulation of the problem

We consider the following variational problem, defined on a tensor product Hilbert space $(V, \|\cdot\|_V)$:
$$u \in V, \quad A(u, v) = \ell(v) \quad \forall v \in V, \qquad (15)$$
where $A(\cdot,\cdot) : V \times V \to \mathbb{R}$ is a continuous, $V$-elliptic bilinear form, i.e. such that for all $u, v \in V$,
$$|A(u, v)| \le M \|u\|_V \|v\|_V, \qquad (16)$$
$$A(v, v) \ge \alpha \|v\|_V^2, \qquad (17)$$
for constants $M > 0$ and $\alpha > 0$. Now we introduce the operator $A : V \to V$ associated with the bilinear form and defined by
$$A(u, v) = (A u, v)_V \qquad (18)$$
for all $u, v \in V$. We also introduce the element $l \in V$ associated with $\ell$ and defined by
$$\ell(v) = (l, v)_V \qquad (19)$$
for all $v \in V$. The existence of $A$ and $l$ is ensured by the Riesz representation theorem. Problem (15) can be rewritten in operator form:
$$A u = l. \qquad (20)$$

4.2 The Proper Generalized Decomposition for a continuous, $V$-elliptic, symmetric bilinear form

Assume that for all $u, v \in V$,
$$A(u, v) = A(v, u). \qquad (21)$$
From the assumptions on the bilinear form $A(\cdot,\cdot)$, we know that the operator $A$ is bounded, self-adjoint, and positive definite. As usual, we will denote by $(\cdot,\cdot)_A$ the inner product induced by the operator $A$, where for all $u, v \in V$,
$$(u, v)_A = (Au, v)_V = (u, Av)_V,$$
and by $\|u\|_A = (u, u)_A^{1/2}$ the associated norm. Note that if $A = I$, the identity operator, then $\|\cdot\|_A = \|\cdot\|_V$.

From the properties of the operator $A$, the norm $\|\cdot\|_A$ is equivalent to $\|\cdot\|_V$. Therefore, by Corollary 2, the set $S_1$ is weakly closed in $(V, \|\cdot\|_A)$, and hence $\|\cdot\|_A$ satisfies Assumption 4. In this case we consider, for each $r \in V$, the map
$$E_{A^{-1} r}(v) = \frac{1}{2}\|v\|_A^2 - (A^{-1} r, v)_A = \frac{1}{2}\|v\|_A^2 - (r, v)_V.$$

Definition 9 (PGD for self-adjoint operators) Let $z_0 = 0$ and for $n \ge 1$,
$$r_n = l - A z_{n-1}, \qquad (22)$$
$$z_n = z_{n-1} + v_n, \quad \text{where } v_n \in \operatorname*{argmin}_{v \in S_1} E_{A^{-1} r_n}(v). \qquad (23)$$

From Theorem 8 we obtain that
$$\lim_{n \to \infty} \|A^{-1} r_n\|_A = 0,$$
that is, $\lim_{n \to \infty} z_n = A^{-1} l$ in the $\|\cdot\|_A$-norm. Since $\|\cdot\|_A$ is equivalent to $\|\cdot\|_V$, the sequence $\{z_n\}_{n \ge 0}$ also converges to $A^{-1} l$ in the $\|\cdot\|_V$-norm. Observe that the convergence rate given in (14) is $\|\cdot\|_A$-norm dependent; more precisely,
$$\frac{\|A^{-1} r_n\|_A}{\|A^{-1} l\|_A} = \prod_{i=1}^{n-1} \sin \theta_i,
\quad \text{where} \quad
\theta_i = \arccos \frac{(A^{-1} r_i, v_i)_A}{\|A^{-1} r_i\|_A \, \|v_i\|_A}.$$

A natural question arises in this context: how do we compute a minimum of $E_{A^{-1} r}$ over $S_1$ for a given $r \in V$? Note that if
$$v = \bigotimes_{i=1}^{d} v_i \in \operatorname*{argmin}_{z \in S_1} E_{A^{-1} r}(z),$$
then
$$\frac{d}{dt}\, E_{A^{-1} r}\!\left( \bigotimes_{i=1}^{d} (v_i + t w_i) \right) \Bigg|_{t=0} = 0 \qquad (24)$$
holds for all $(w_1, \ldots, w_d) \in V_1 \times \cdots \times V_d$. Equation (24) is equivalent to the following Euler-Lagrange equation:
$$\sum_{i=1}^{d} \left( r - A \bigotimes_{j=1}^{d} v_j, \; v_1 \otimes \cdots \otimes v_{i-1} \otimes w_i \otimes v_{i+1} \otimes \cdots \otimes v_d \right)_V = 0 \qquad (25)$$
for all $(w_1, \ldots, w_d) \in V_1 \times \cdots \times V_d$.

4.2.1 A special case

Now assume that $A = \sum_{k=1}^{n_A} \bigotimes_{i=1}^{d} A_i^k$ and $r = \sum_{k=1}^{n_r} \bigotimes_{i=1}^{d} r_i^k$ are also given in rank-one form. Then the Euler-Lagrange equation reads
$$\sum_{i=1}^{d} \left( \sum_{k=1}^{n_r} \prod_{j=1, j \ne i}^{d} (r_j^k, v_j)_j \, r_i^k - \sum_{s=1}^{n_A} \prod_{j=1, j \ne i}^{d} (A_j^s v_j, v_j)_j \, A_i^s v_i, \; w_i \right)_i = 0 \qquad (26)$$
for all $(w_1, \ldots, w_d) \in V_1 \times \cdots \times V_d$, and hence
$$\left( \sum_{k=1}^{n_r} \prod_{j=1, j \ne i}^{d} (r_j^k, v_j)_j \, r_i^k - \sum_{s=1}^{n_A} \prod_{j=1, j \ne i}^{d} (A_j^s v_j, v_j)_j \, A_i^s v_i, \; w_i \right)_i = 0 \qquad (27)$$
for $1 \le i \le d$ and for all $(w_1, \ldots, w_d) \in V_1 \times \cdots \times V_d$. Now consider an $n$-dimensional subspace $\operatorname{span}\{w_i^1, \ldots, w_i^n\}$ of $V_i$, for each $1 \le i \le d$, and define the vector $\mathbf{v}_i \in \mathbb{R}^n$ by
$$(\mathbf{v}_i)_\alpha = \upsilon_i^\alpha, \quad \text{where } v_i = \sum_{\alpha=1}^{n} \upsilon_i^\alpha w_i^\alpha.$$
On the other hand, introduce the symmetric matrices $\mathbf{A}_j^s \in \mathbb{R}^{n \times n}$ and the vectors $\mathbf{r}_j^k \in \mathbb{R}^n$ by
$$(\mathbf{A}_j^s)_{\alpha,\beta} = (A_j^s w_j^\beta, w_j^\alpha)_j \quad \text{and} \quad (\mathbf{r}_j^k)_\alpha = (r_j^k, w_j^\alpha)_j.$$
Note that we can write
$$(\mathbf{r}_1^k \otimes \cdots \otimes \mathbf{r}_d^k)_{\alpha_1, \ldots, \alpha_d} = \prod_{j=1}^{d} (r_j^k, w_j^{\alpha_j})_j = \big( \otimes_{j=1}^{d} r_j^k, \otimes_{j=1}^{d} w_j^{\alpha_j} \big)_V$$
for $1 \le k \le n_r$.

Under the above notation the PGD runs as follows. Start with $\mathbf{u} = 0 \in \mathbb{R}^{n^d}$ and $\mathbf{r}_j^k = \mathbf{l}_j^k$; here we assume that $l = \sum_{k=1}^{n_l} \otimes_{i=1}^{d} l_i^k$, so that $n_r = n_l$. Then we compute $\{\mathbf{v}_1, \ldots, \mathbf{v}_d\} \subset \mathbb{R}^n$ as follows. Note that (27) can be written as
$$\sum_{k=1}^{n_r} \prod_{j=1, j \ne i}^{d} \langle \mathbf{r}_j^k, \mathbf{v}_j \rangle_2 \, \mathbf{r}_i^k - \sum_{s=1}^{n_A} \prod_{j=1, j \ne i}^{d} \langle \mathbf{v}_j, \mathbf{A}_j^s \mathbf{v}_j \rangle_2 \, \mathbf{A}_i^s \mathbf{v}_i = 0 \qquad (28)$$
for $1 \le i \le d$; here $\langle \cdot, \cdot \rangle_2$ denotes the usual inner product in $\mathbb{R}^n$. This non-linear system is solved by the fixed-point strategy of Algorithm 1, after which we can update the solution $\mathbf{u} = \mathbf{u} + \mathbf{v}_1 \otimes \cdots \otimes \mathbf{v}_d$ and the residual, by setting $n_r = n_r + n_A$ and $\mathbf{r}_j^{n_r + k} = \mathbf{A}_j^k \mathbf{v}_j$ for $1 \le k \le n_A$ and $1 \le j \le d$.
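A minimal sketch of the fixed-point (alternating) strategy for (28), assuming $d = 2$, a single SPD Kronecker term $A = A_1 \otimes A_2$ and a rank-one right-hand side; all matrices and vectors below are made up for illustration and this is not the paper's implementation. Each sweep solves (28) for one factor with the other frozen.

```python
import numpy as np

# Alternating solve of the rank-one Euler-Lagrange system (28) for
# d = 2, n_A = 1 (A = A1 (x) A2, both SPD) and n_r = 1 (r = r1 (x) r2).
A1 = np.array([[2.0, -1.0], [-1.0, 2.0]])
A2 = np.array([[3.0, 1.0], [1.0, 3.0]])
r1 = np.array([1.0, 2.0])
r2 = np.array([0.5, -1.0])

v1, v2 = np.ones(2), np.ones(2)
for _ in range(50):
    # solve (28) for i = 1 with v2 frozen, then for i = 2 with v1 frozen
    v1 = np.linalg.solve(A1, r1) * (r2 @ v2) / (v2 @ A2 @ v2)
    v2 = np.linalg.solve(A2, r2) * (r1 @ v1) / (v1 @ A1 @ v1)

# At a fixed point both components of (28) vanish.
lhs1 = (r2 @ v2) * r1 - (v2 @ A2 @ v2) * (A1 @ v1)
lhs2 = (r1 @ v1) * r2 - (v1 @ A1 @ v1) * (A2 @ v2)

# For this rank-one data the exact solution A^{-1} r is itself rank one,
# so a single PGD term already recovers it.
exact = np.kron(np.linalg.solve(A1, r1), np.linalg.solve(A2, r2))
assert np.allclose(lhs1, 0.0, atol=1e-9)
assert np.allclose(lhs2, 0.0, atol=1e-9)
assert np.allclose(np.kron(v1, v2), exact)
```

With several terms in $A$ or in the residual the one-dimensional solves become genuine linear systems, which is exactly the update in Algorithm 1.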

We remark that under this notation we have
$$(\mathbf{A}_1^k \mathbf{v}_1 \otimes \cdots \otimes \mathbf{A}_d^k \mathbf{v}_d)_{\alpha_1, \ldots, \alpha_d} = \prod_{j=1}^{d} (\mathbf{A}_j^k \mathbf{v}_j)_{\alpha_j} = \prod_{j=1}^{d} (A_j^k v_j, w_j^{\alpha_j})_j = \big( \otimes_{j=1}^{d} A_j^k v_j, \otimes_{j=1}^{d} w_j^{\alpha_j} \big)_V$$
for $1 \le k \le n_A$. In consequence, at step $N \ge 1$ the residual can be written as
$$\mathbf{r}_N = \sum_{k=1}^{n_r} \otimes_{j=1}^{d} \mathbf{r}_j^k \in \mathbb{R}^{n^d},$$
where $n_r = n_l + N n_A$. The proposed algorithm is given in Algorithm 1.

4.3 The Proper Generalized Decomposition for a continuous, $V$-elliptic bilinear form

We consider for each $r \in V$ the map
$$E_r(Av) = (E_r \circ A)(v) = \frac{1}{2}\|Av\|_V^2 - (r, Av)_V = \frac{1}{2}\|v\|_{A^*A}^2 - (A^* r, v)_V = \frac{1}{2}\|v\|_{A^*A}^2 - \big((A^*A)^{-1} A^* r, v\big)_{A^*A}.$$
Since $A^*A$ is a self-adjoint operator, the norm $\|\cdot\|_{A^*A}$ is equivalent to the $\|\cdot\|_V$-norm and, in consequence, $S_1$ is also weakly closed in the $\|\cdot\|_{A^*A}$-norm. Moreover, $E_r \circ A$ in the $\|\cdot\|_V$-norm equals $E_{(A^*A)^{-1} A^* r}$ in the $\|\cdot\|_{A^*A}$-norm.

Definition 10 (PGD for non-self-adjoint operators) Let $z_0 = 0$ and for $n \ge 1$,
$$r_n = l - A z_{n-1}, \qquad (29)$$
$$z_n = z_{n-1} + v_n, \quad \text{where } v_n \in \operatorname*{argmin}_{v \in S_1} E_{r_n}(Av). \qquad (30)$$

From Theorem 8 we obtain
$$\lim_{n \to \infty} \|(A^*A)^{-1} A^* r_n\|_{A^*A} = \lim_{n \to \infty} \|A (A^*A)^{-1} A^* r_n\|_V = \lim_{n \to \infty} \|r_n\|_V = 0.$$
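The key manipulation above, rewriting $E_r(Av)$ as an energy functional in the $(\cdot,\cdot)_{A^*A}$ inner product, can be checked numerically in finite dimensions. The sketch below (an added illustration with made-up random data, not from the paper) verifies the identity for a nonsymmetric matrix.

```python
import numpy as np

# Check E_r(A v) = 1/2 ||v||_{A^T A}^2 - ((A^T A)^{-1} A^T r, v)_{A^T A}
# for a nonsymmetric finite-dimensional A (so A^* = A^T).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
r = rng.standard_normal(5)
v = rng.standard_normal(5)

Er_Av = 0.5 * np.dot(A @ v, A @ v) - np.dot(r, A @ v)

AtA = A.T @ A
rho = np.linalg.solve(AtA, A.T @ r)          # (A^T A)^{-1} A^T r
E_alt = 0.5 * v @ AtA @ v - rho @ AtA @ v    # energy in the (., .)_{A^T A} product

assert np.isclose(Er_Av, E_alt)
```

This is why the non-self-adjoint case reduces to the self-adjoint theory: the normal operator $A^*A$ supplies the inner product in which Theorem 8 applies.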

Thus, in this case the sequence $\{z_n\}_{n \ge 0}$ also converges to the solution $A^{-1} l$ in the $\|\cdot\|_V$-norm. Here the convergence rate (14) is given by
$$\frac{\|(A^*A)^{-1} A^* r_n\|_{A^*A}}{\|(A^*A)^{-1} A^* l\|_{A^*A}} = \frac{\|r_n\|_V}{\|l\|_V} = \prod_{i=1}^{n-1} \sin \theta_i,$$
where
$$\theta_i = \arccos \frac{\big((A^*A)^{-1} A^* r_i, v_i\big)_{A^*A}}{\|(A^*A)^{-1} A^* r_i\|_{A^*A} \, \|v_i\|_{A^*A}} = \arccos \frac{(r_i, A v_i)_V}{\|r_i\|_V \, \|A v_i\|_V}.$$
In order to solve the associated minimization problem for a given $r \in V$, we have that if
$$\bigotimes_{i=1}^{d} v_i \in \operatorname*{argmin}_{z \in S_1} E_r(Az),$$
then the following Euler-Lagrange equation holds for all $(w_1, \ldots, w_d) \in V_1 \times \cdots \times V_d$:
$$\sum_{i=1}^{d} \left( A^* r - A^* A \bigotimes_{j=1}^{d} v_j, \; v_1 \otimes \cdots \otimes v_{i-1} \otimes w_i \otimes v_{i+1} \otimes \cdots \otimes v_d \right)_V = 0,$$
or equivalently,
$$\sum_{i=1}^{d} \left( r - A \bigotimes_{j=1}^{d} v_j, \; A(v_1 \otimes \cdots \otimes v_{i-1} \otimes w_i \otimes v_{i+1} \otimes \cdots \otimes v_d) \right)_V = 0. \qquad (31)$$

4.3.1 A special case

Now assume that $A = \sum_{k=1}^{n_A} \bigotimes_{i=1}^{d} A_i^k$ and $r = \sum_{k=1}^{n_r} \bigotimes_{i=1}^{d} r_i^k$ are also given in rank-one form. Then the Euler-Lagrange equation reads
$$\sum_{i=1}^{d} \sum_{s=1}^{n_A} \left( \sum_{k=1}^{n_r} \prod_{j=1, j \ne i}^{d} (r_j^k, A_j^s v_j)_j \, r_i^k - \sum_{t=1}^{n_A} \prod_{j=1, j \ne i}^{d} (A_j^t v_j, A_j^s v_j)_j \, A_i^t v_i, \; A_i^s w_i \right)_i = 0 \qquad (32)$$
for all $(w_1, \ldots, w_d) \in V_1 \times \cdots \times V_d$, and hence
$$\sum_{s=1}^{n_A} \left( \sum_{k=1}^{n_r} \prod_{j=1, j \ne i}^{d} (r_j^k, A_j^s v_j)_j \, r_i^k - \sum_{t=1}^{n_A} \prod_{j=1, j \ne i}^{d} (A_j^t v_j, A_j^s v_j)_j \, A_i^t v_i, \; A_i^s w_i \right)_i = 0 \qquad (33)$$
for $1 \le i \le d$ and for all $(w_1, \ldots, w_d) \in V_1 \times \cdots \times V_d$. In a similar way to the symmetric case, consider an $n$-dimensional subspace $\operatorname{span}\{w_i^1, \ldots, w_i^n\}$ of $V_i$, for each $1 \le i \le d$, and in this case define the vector $\mathbf{v}_i \in \mathbb{R}^n$ by
$$(\mathbf{v}_i)_\alpha = \upsilon_i^\alpha, \quad \text{where } v_i = \sum_{\alpha=1}^{n} \upsilon_i^\alpha w_i^\alpha.$$

In this case, the symmetric matrices $\mathbf{A}_j^{k,i} \in \mathbb{R}^{n \times n}$ and the vectors $\mathbf{r}_j^{k,i} \in \mathbb{R}^n$ are given by
$$(\mathbf{A}_j^{k,i})_{\alpha,\beta} = (A_j^k w_j^\beta, A_j^i w_j^\alpha)_j \quad \text{and} \quad (\mathbf{r}_j^{k,i})_\alpha = (r_j^k, A_j^i w_j^\alpha)_j.$$
With this notation, (33) can be written as
$$\sum_{s=1}^{n_A} \sum_{k=1}^{n_r} \prod_{j=1, j \ne i}^{d} \langle \mathbf{r}_j^{k,s}, \mathbf{v}_j \rangle_2 \, \mathbf{r}_i^{k,s} - \sum_{s=1}^{n_A} \sum_{t=1}^{n_A} \prod_{j=1, j \ne i}^{d} \langle \mathbf{v}_j, \mathbf{A}_j^{t,s} \mathbf{v}_j \rangle_2 \, \mathbf{A}_i^{t,s} \mathbf{v}_i = 0 \qquad (34)$$
for $1 \le i \le d$. The proposed algorithm is given in Algorithm 2.

5 A case study: The first passage time and the Poisson equation in $(0,1)^d$

Our first case study is based on the well-known Feynman-Kac representation of the solution to the Dirichlet problem for Poisson's equation. Recall that the Dirichlet problem for Poisson's equation is
$$-\Delta u = f \ \text{in } \Omega \subset \mathbb{R}^d, \qquad u|_{\partial\Omega} = 0, \qquad (35)$$
where $f = f(x_1, x_2, \ldots, x_d)$ is a given function and $\Delta = \sum_{i=1}^{d} \frac{\partial^2}{\partial x_i^2}$ is the Laplace operator. The solution of this problem at $x_0 \in \mathbb{R}^d$, given in the form of a path integral with respect to the standard $d$-dimensional Brownian motion $W_t$, is
$$u(x_0) = \mathbb{E}\left[ \int_0^{\tau_{\partial\Omega}} 2 f(W_t)\, dt \right]. \qquad (36)$$
Here $\tau_{\partial\Omega} = \inf\{ t : W_t \in \partial\Omega \}$ is the first-passage time and $W_{\tau_{\partial\Omega}}$ is the first-passage location on the boundary $\partial\Omega$. We assume that $\mathbb{E}[\tau_{\partial\Omega}] < \infty$ for all $x_0 \in \Omega$, that $f$ and $u$ are continuous and bounded in $\Omega$, and that the boundary $\partial\Omega$ is sufficiently smooth so as to ensure the existence of a unique solution $u(x)$ that has bounded and continuous first- and second-order partial derivatives in any interior subdomain.

Example 11 Firstly, we consider the following problem in 3D: solve, for $(x_1, x_2, x_3) \in \Omega = (0,1)^3$,
$$-\Delta u = (2\pi)^2 \cdot 3 \cdot \sin(2\pi x_1 - \pi) \sin(2\pi x_2 - \pi) \sin(2\pi x_3 - \pi), \qquad (37)$$
$$u|_{\partial\Omega} = 0, \qquad (38)$$
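The problem (37)-(38) can be cross-checked with a small dense finite-difference sketch. This is an added illustration, not the paper's PGD solver: it assembles the 3D Dirichlet Laplacian as a Kronecker sum of one-dimensional operators (the same separated structure the PGD exploits) and solves it directly; the mesh size is made up.

```python
import numpy as np

# 3D Dirichlet Laplacian on (0,1)^3 as a Kronecker sum of 1D operators.
N = 10                      # interior nodes per direction (illustrative)
h = 1.0 / (N + 1)
x = h * np.arange(1, N + 1)

T = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2   # 1D -d^2/dx^2, Dirichlet
I = np.eye(N)
A = (np.kron(np.kron(T, I), I) + np.kron(np.kron(I, T), I)
     + np.kron(np.kron(I, I), T))            # Kronecker-sum Laplacian

s = np.sin(2 * np.pi * x - np.pi)            # separable 1D factor
u_exact = np.kron(np.kron(s, s), s)          # rank-one exact solution
f = 12 * np.pi**2 * u_exact                  # right-hand side of (37)

u_h = np.linalg.solve(A, f)
rel_err = np.linalg.norm(u_h - u_exact) / np.linalg.norm(u_exact)
assert rel_err < 0.05                        # second-order FD accuracy
```

Because both the operator and the right-hand side are rank one in the separated sense, this is precisely the situation where a single PGD term captures the discrete solution.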

Figure 1: The relative error $\|u_1 - A^{-1} f\|_2 / \|A^{-1} f\|_2$ in logarithmic scale.

This problem has the closed-form solution
$$u(x_1, x_2, x_3) = \sin(2\pi x_1 - \pi) \sin(2\pi x_2 - \pi) \sin(2\pi x_3 - \pi).$$
We used the separable representation algorithm (Algorithm 1) with parameter values iter_max = 5, rank_max = 1000 and $\varepsilon = 0.001$. The algorithm gives us an approximate solution $u_1 \in S_1$. In Figure 1 we represent the relative error of the solution computed using the separable representation algorithm, in logarithmic scale, as a function of the number of nodes used in the discretization of the Poisson equation. All the computations were performed using the GNU software OCTAVE on an AMD64 Athlon K8 with 2 GiB of RAM. In Figure 2 we represent the CPU time, in logarithmic scale, used in solving the standard FEM linear system against the separable representation algorithm. In both cases all the linear systems involved were solved using the standard linear system solver (A\b) of OCTAVE.

Example 12 Finally, we address some highly multidimensional models. To this end we solve (35) numerically for $(x_1, \ldots, x_d) \in \Omega = (0, \pi)^d$ with
$$f = \sum_{k=1}^{d} -(1+k) \sin^{k-1}(x_k) \left( -k \cos^2(x_k) + \sin^2(x_k) \right) \prod_{k'=1,\, k' \ne k}^{d} \sin^{1+k'}(x_{k'}),$$

Figure 2: The CPU time, in seconds, used in solving the linear system as a function of the number of nodes employed in the discretization of the Poisson equation.

This problem has the closed-form solution
$$u(x_1, \ldots, x_d) = \prod_{k=1}^{d} \sin^{k+1}(x_k).$$
Here we consider the true solution $u$ given by $U_{i_1, \ldots, i_d} = u(\hat{x}_{i_1+1}, \ldots, \hat{x}_{i_d+1})$. For $d = 10$ we use the parameter values iter_max = 2, rank_max = 10 and $\varepsilon = 0.001$. In a similar way as above, the algorithm gives us an approximate solution $\hat{u} \in S_1$. In Figure 3 we represent the absolute error $\|\hat{u} - u\|_2$ as a function of $h = \pi/N$ for $N = 5, 10, 20, \ldots, 160$ in $\log_{10}$-scale. Using similar parameter values, the problem has been solved for $d = 100$ in about 20 minutes.

Figure 3: The absolute error $\|\hat{u} - u\|_2$ as a function of $h = \pi/N$ for $N = 5, 10, 20, \ldots, 160$ in $\log_{10}$-scale.

References

[1] A. Ammar, B. Mokdad, F. Chinesta, and R. Keunings. A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modelling of complex fluids. Journal of Non-Newtonian Fluid Mechanics, 139(3):153–176, 2006.

[2] A. Ammar, B. Mokdad, F. Chinesta, and R. Keunings. A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modelling of complex fluids. Part II: Transient simulation using space-time separated representations. Journal of Non-Newtonian Fluid Mechanics, 144(2–3):98–121, 2007.

[3] A. Ammar, F. Chinesta, and A. Falcó. On the convergence of a Greedy Rank-One Update Algorithm for a class of linear systems. Archives of Computational Methods in Engineering, in press.

[4] G. Beylkin and M. J. Mohlenkamp. Algorithms for numerical analysis in high dimensions. SIAM Journal on Scientific Computing, 26(6):2133–2159, 2005.

[5] J. Chen and Y. Saad. On the tensor SVD and the optimal low rank orthogonal approximation of tensors. SIAM Journal on Matrix Analysis and Applications, 30(4):1709–1734, 2008.

[6] F. Chinesta, A. Ammar, and E. Cueto. Recent advances in the use of the proper generalized decomposition for solving multidimensional models. Archives of Computational Methods in Engineering, in press.

[7] R. F. Curtain and A. J. Pritchard. Functional Analysis in Modern Applied Mathematics. Academic Press, 1977.

[8] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253–1278, 2000.

[9] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.

[10] A. Falcó and A. Nouy. A Proper Generalized Decomposition for the solution of elliptic problems in abstract form by using a functional Eckart-Young approach. Submitted, 2010.

[11] V. de Silva and L.-H. Lim. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM Journal on Matrix Analysis and Applications, 30(3):1084–1127, 2008.

[12] A. Doostan and G. Iaccarino. A least-squares approximation of partial differential equations with high-dimensional random inputs. Journal of Computational Physics, 228(12):4332–4345, 2009.

[13] D. Dureisseix, P. Ladevèze, and B. A. Schrefler. A computational strategy for multiphysics problems: application to poroelasticity. International Journal for Numerical Methods in Engineering, 56(10):1489–1510, 2003.

[14] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211–218, 1936.

[15] T. G. Kolda. Orthogonal tensor decompositions. SIAM Journal on Matrix Analysis and Applications, 23(1):243–255, 2001.

[16] T. G. Kolda. A counterexample to the possibility of an extension of the Eckart-Young low-rank approximation theorem for the orthogonal rank tensor decomposition. SIAM Journal on Matrix Analysis and Applications, 24(3):762–767, 2003.

[17] P. Ladevèze. Nonlinear Computational Structural Mechanics: New Approaches and Non-Incremental Methods of Calculation. Springer Verlag, 1999.

[18] P. Ladevèze and A. Nouy. On a multiscale computational strategy with time and space homogenization for structural mechanics. Computer Methods in Applied Mechanics and Engineering, 192:3061–3087, 2003.

[19] P. Ladevèze, J. C. Passieux, and D. Néron. The LATIN multiscale computational method and the Proper Generalized Decomposition. Computer Methods in Applied Mechanics and Engineering, in press.

[20] C. Le Bris, T. Lelièvre, and Y. Maday. Results and questions on a nonlinear approximation approach for solving high-dimensional partial differential equations. Constructive Approximation, 30(3):621–651, 2009.

[21] A. Nouy. A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations. Computer Methods in Applied Mechanics and Engineering, 196(45-48):4521–4537, 2007.

[22] A. Nouy. Generalized spectral decomposition method for solving stochastic finite element equations: invariant subspace problem and dedicated algorithms. Computer Methods in Applied Mechanics and Engineering, 197:4718–4736, 2008.

[23] A. Nouy. Recent developments in spectral stochastic methods for the numerical solution of stochastic partial differential equations. Archives of Computational Methods in Engineering, 16(3):251–285, 2009.

[24] A. Nouy. Proper generalized decompositions and separated representations for the numerical solution of high dimensional stochastic problems. Archives of Computational Methods in Engineering, in press.

[25] A. Nouy. A priori model reduction through Proper Generalized Decomposition for solving time-dependent partial differential equations. Computer Methods in Applied Mechanics and Engineering, in press.

[26] A. Nouy and P. Ladevèze. Multiscale computational strategy with time and space homogenization: a radial-type approximation technique for solving micro problems. International Journal for Multiscale Computational Engineering, 170(2):557–574, 2004.

[27] A. Nouy and O. P. Le Maître. Generalized spectral decomposition method for stochastic non linear problems. Journal of Computational Physics, 228(1):202–235, 2009.

Algorithm 1 PGD, symmetric case

procedure PGDSYM($\sum_{k=1}^{n_l} \otimes_{j=1}^{d} l_j^k$, $\sum_{k=1}^{n_A} \otimes_{j=1}^{d} A_j^k$, $\varepsilon$, tol, rank_max)
    $(\mathbf{A}_j^s)_{\alpha,\beta} \leftarrow (A_j^s w_j^\beta, w_j^\alpha)_j$ for $1 \le j \le d$ and $1 \le s \le n_A$
    $(\mathbf{r}_j^k)_\alpha \leftarrow (l_j^k, w_j^\alpha)_j$ for $1 \le j \le d$ and $1 \le k \le n_l$
    $\mathbf{u} \leftarrow 0$
    for $m = 0, 1, 2, \ldots,$ rank_max do
        initialize $\mathbf{v}_i^0 \in \mathbb{R}^n$ for $i = 1, \ldots, d$   ▷ solve $\min_{v \in S_1} E_{A^{-1} r}(v)$ by a fixed-point strategy
        distance $\leftarrow 1$
        while distance $\ge \varepsilon$ do
            for $k = 1, 2, \ldots, d$ do
                $\mathbf{v}_k^1 \leftarrow \mathbf{v}_k^0$
                $\mathbf{v}_k^0 \leftarrow \Big[ \sum_{s=1}^{n_A} \prod_{j=1, j \ne k}^{d} \langle \mathbf{v}_j^0, \mathbf{A}_j^s \mathbf{v}_j^0 \rangle_2 \, \mathbf{A}_k^s \Big]^{-1} \sum_{c=1}^{n_l} \prod_{j=1, j \ne k}^{d} \langle \mathbf{r}_j^c, \mathbf{v}_j^0 \rangle_2 \, \mathbf{r}_k^c$
            end for
            distance $\leftarrow \max_{1 \le i \le d} \| \mathbf{v}_i^0 - \mathbf{v}_i^1 \|_2$
        end while
        $\mathbf{u} \leftarrow \mathbf{u} + \mathbf{v}_1^0 \otimes \cdots \otimes \mathbf{v}_d^0$
        for $k = 1, \ldots, n_A$ do   ▷ update the residual
            $\mathbf{r}_j^{n_l + k} \leftarrow -\mathbf{A}_j^k \mathbf{v}_j^0$ for $1 \le j \le d$
        end for
        $n_l \leftarrow n_l + n_A$   ▷ update the residual tensor rank
        residual($m$) $\leftarrow \sum_{k=1}^{n_l} \prod_{j=1}^{d} \| \mathbf{r}_j^k \|_2$
        if residual($m$) $< \varepsilon$ or $|$residual($m$) $-$ residual($m-1$)$| <$ tol then
            return $\mathbf{u}$ and residual($m$)
        end if
    end for
    return $\mathbf{u}$ and residual(rank_max)
end procedure

Algorithm 2 PGD, non-symmetric case

procedure PGD($\sum_{k=1}^{n_l} \otimes_{j=1}^{d} l_j^k$, $\sum_{k=1}^{n_A} \otimes_{j=1}^{d} A_j^k$, $\varepsilon$, tol, rank_max)
    $(\mathbf{A}_j^{k,i})_{\alpha,\beta} \leftarrow (A_j^k w_j^\beta, A_j^i w_j^\alpha)_j$ for $1 \le j \le d$ and $1 \le k, i \le n_A$
    $(\mathbf{r}_j^{k,i})_\alpha \leftarrow (l_j^k, A_j^i w_j^\alpha)_j$ for $1 \le j \le d$, $1 \le k \le n_l$ and $1 \le i \le n_A$
    $\mathbf{u} \leftarrow 0$
    for $m = 0, 1, 2, \ldots,$ rank_max do
        initialize $\mathbf{v}_i^0 \in \mathbb{R}^n$ for $i = 1, \ldots, d$   ▷ solve the rank-one minimization (34) by a fixed-point strategy
        distance $\leftarrow 1$
        while distance $\ge \varepsilon$ do
            for $k = 1, 2, \ldots, d$ do
                $\mathbf{v}_k^1 \leftarrow \mathbf{v}_k^0$
                $\mathbf{v}_k^0 \leftarrow \Big[ \sum_{s=1}^{n_A} \sum_{t=1}^{n_A} \prod_{j=1, j \ne k}^{d} \langle \mathbf{v}_j^0, \mathbf{A}_j^{t,s} \mathbf{v}_j^0 \rangle_2 \, \mathbf{A}_k^{t,s} \Big]^{-1} \sum_{s=1}^{n_A} \sum_{c=1}^{n_l} \prod_{j=1, j \ne k}^{d} \langle \mathbf{r}_j^{c,s}, \mathbf{v}_j^0 \rangle_2 \, \mathbf{r}_k^{c,s}$
            end for
            distance $\leftarrow \max_{1 \le i \le d} \| \mathbf{v}_i^0 - \mathbf{v}_i^1 \|_2$
        end while
        $\mathbf{u} \leftarrow \mathbf{u} + \mathbf{v}_1^0 \otimes \cdots \otimes \mathbf{v}_d^0$
        for $k = 1, \ldots, n_A$ do   ▷ update the residual
            $\mathbf{r}_j^{n_l + k, s} \leftarrow -\mathbf{A}_j^{k,s} \mathbf{v}_j^0$ for $1 \le j \le d$ and $1 \le s \le n_A$
        end for
        $n_l \leftarrow n_l + n_A$   ▷ update the residual tensor rank
        residual($m$) $\leftarrow \sum_{k=1}^{n_l} \prod_{j=1}^{d} \| \mathbf{r}_j^k \|_2$
        if residual($m$) $< \varepsilon$ or $|$residual($m$) $-$ residual($m-1$)$| <$ tol then
            return $\mathbf{u}$ and residual($m$)
        end if
    end for
    return $\mathbf{u}$ and residual(rank_max)
end procedure
