The approach there is based on the introduction of suitable Galerkin least-squares terms that arise from the constitutive and equilibrium equations, and from the relation defining the rotation in terms of the displacement. In , besides these Galerkin least-squares terms, a consistency term related to the non-homogeneous Dirichlet boundary condition is added. In the case of pure Dirichlet boundary conditions, the bilinear form of the augmented formulation is bounded and coercive on the whole space; hence, the associated Galerkin scheme is well-posed for any finite element subspace. Thus, it is possible to use as finite element subspaces some choices that are not feasible for the usual (non-augmented) dual-mixed formulation. In particular, one can employ Raviart-Thomas elements of lowest order to approximate the stress tensor, continuous piecewise linear elements for the displacement, and piecewise constants for the rotation. In the case of mixed boundary conditions, the trace of the displacement on the Neumann boundary can be approximated by continuous piecewise linear elements on an independent partition of that boundary, whose mesh size needs to satisfy a compatibility condition with the mesh size of the triangulation of the domain.
The most popular approach in applications is based on the mixed formulation, with pressure and velocity as unknowns. It is well known that the Galerkin scheme associated with this formulation is not always well-posed, and stability is ensured only for certain combinations of finite element subspaces. In this framework, several stabilization methods have been proposed in the literature. We consider a stabilized mixed finite element method introduced by Masud and Hughes in  for isotropic porous media. This method is based on the addition of suitable residual-type terms to the standard dual-mixed approach. The resulting scheme is stable for any combination of continuous velocity and pressure interpolations, and has the distinctive feature that the stabilization parameters can be chosen independently of the mesh size. This property was already present in the modified mixed formulation introduced in  for second order elliptic problems. The stabilization introduced in  was also applied in  to analyze a mixed discontinuous Galerkin method for Darcy flow. A similar idea is used in  to derive several unconditionally stable mixed finite element methods for Darcy flow. Finally, concerning the a posteriori error analysis of the method proposed in , a residual-based a posteriori error estimate of the velocity in the L^2-norm was derived in .
Mixed finite element methods are typically used in linear elasticity to avoid locking effects. They also allow direct approximation of unknowns of physical interest, such as the stresses. We consider here the mixed method of Hellinger and Reissner, which provides simultaneous approximations of the displacement u and the stress tensor σ. The symmetry of the stress tensor prevents the extension of the standard dual-mixed formulation of the Poisson equation to this case. In general, the symmetry of σ is imposed weakly, through the introduction of the rotation as an additional unknown, and stable mixed finite elements for the linear elasticity problem involve many degrees of freedom (see, for instance, ).
As noted in the previous sections, a mixed formulation usually represents a better and more efficient way to predict the behaviour of shell elements. Therefore, a 4-node mixed shell element has been chosen for implementation, see Figure 7.6. The element was proposed by K.J. Bathe and E.N. Dvorkin in reference (DB84), and exhibits the following characteristics: “(i) The element is able to represent the six rigid body modes, (ii) it also can approximate the Kirchhoff-Love hypothesis of negligible shear deformation effects and can be used for thin shells, and (iii) the element does not contain spurious energy modes” (DB84). When talking about the six rigid
Although DIR is widely used in the medical imaging community, the mathematical and numerical analysis of DIR remains understudied. The DIR continuous problem has been formulated using mainly three approaches: minimization of similarity measures (with or without constraints), as an optimal mass transport problem [22, 31, 11], or as a combined level-set segmentation-registration problem [42, 17]. The problem of minimizing similarity measures has been studied in [5, 43], where the direct method of the calculus of variations has been used to establish existence of solutions. The optical flow formulation, an associated problem which can be seen as a sequence of registration problems in time, was proposed by Horn & Schunck in 1981 , and has been the subject of analysis from an optimal-control point of view [7, 27]. Well-posedness of optical flow schemes has been established for Dirichlet boundary conditions under reasonable assumptions [18, 41]. Besides providing existence and uniqueness of the solution, by assuming only uniform boundedness of the images, these studies show that the solution is a step-wise diffeomorphism, which is a desirable regularity property when it comes to warping images. The analysis of the numerical schemes proposed to solve similarity-minimization formulations has received less attention. A noteworthy approach is the work of Pöschl et al. , where both the continuous and discretized problems are analyzed, and a solution is found using a primal finite-element approximation that is shown to be convergent. However, the analysis is restricted to polyconvex energy densities (both for the similarity measure and the regularizer) and volume-preserving transformations, and does not account for the convergence of the transformation gradients and stresses.
A more traditional Galerkin approach has been introduced in  for optimal-control-based registration, but it requires a considerable degree of regularity (H^{2+δ}) of the target and reference image functions, not required by other traditional formulations. While most approaches to DIR problems are based on primal formulations, a mixed formulation of the similarity minimization problem has been proposed in the setting of fluid registration schemes [12, 35], where a sequence of incompressible Stokes problems is solved to find the optimal displacement and pressure fields. While these schemes solve directly for the pressure field, which is desirable for understanding the mechanical behavior of the images being registered, limited analysis has been provided on the well-posedness of the continuous problem and the convergence of numerical discretizations of mixed formulations of DIR problems that use elastic regularizers.
We introduce a new preconditioning technique for iteratively solving linear systems arising from finite element discretization of the mixed formulation of the time-harmonic Maxwell equations. The preconditioners are motivated by spectral equivalence properties of the discrete operators, but are augmentation free and Schur complement free. We provide a complete spectral analysis, and show that the eigenvalues of the preconditioned saddle point matrix are strongly clustered. The analytical observations are accompanied by numerical results that demonstrate the scalability of the proposed approach. Copyright © 2007 John Wiley & Sons, Ltd.
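The kind of eigenvalue clustering asserted in this abstract can be illustrated on a generic saddle-point system. The NumPy sketch below is not the paper's Maxwell discretization or its augmentation-free preconditioner; it demonstrates the classical result of Murphy, Golub and Wathen that the "ideal" block-diagonal preconditioner diag(A, B A^{-1} B^T) clusters the eigenvalues of the preconditioned saddle-point matrix at exactly three values: 1 and (1 ± √5)/2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 15

# Random SPD (1,1)-block A and full-rank constraint block B (illustrative data)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))

# Saddle-point matrix K = [[A, B^T], [B, 0]]
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

# "Ideal" block-diagonal preconditioner P = diag(A, S) with Schur complement
# S = B A^{-1} B^T (the expensive object that practical preconditioners avoid)
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# Eigenvalues of P^{-1} K cluster at exactly 1, (1+sqrt(5))/2, (1-sqrt(5))/2
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
targets = np.array([1.0, (1 + 5**0.5) / 2, (1 - 5**0.5) / 2])
dist = np.abs(eigs.reshape(-1, 1) - targets.reshape(1, -1)).min(axis=1)
print(dist.max())  # distance of every eigenvalue to its nearest cluster point
```

Such perfect three-point clustering is what makes Krylov methods converge in very few iterations; practical preconditioners of the kind described above aim to approximate this spectrum cheaply, without forming the Schur complement.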
to agglomerate particles during granulation, giving rise to an MGS relatively lower than that of the batch I formulation. Assessing the flow properties of the granules showed that the flow rate of the granules was consistent with the granule size, as larger particles generally flow faster. Hence the batch I formulation had a better flow rate compared to the batch II formulation. Similarly, the bulk and tapped densities of batch I granules were relatively lower compared to those of batch II granules because of the lower degree of packing associated with larger particle size, and this translated to a better flow rate due to a greater degree of porosity. The granule properties obtained for the batch I formulation translated into better tableting properties for the formulation in terms of crushing strength, tensile strength, tablet density and friability. The differences observed in the tableting parameters of crushing strength and disintegration time of both formulations were statistically significant at p < 0.05. The higher crushing and tensile strength values obtained for the batch I formulation can be attributed to a greater degree of bonding area and bonding strength per unit area occurring during compression. It implies therefore that the formulation containing maize starch as a multifunctional excipient was more compressible and compactable, leading to better tabletability of the formulation. This is consistent with the findings of Nasipuri (16) and Deshpande and Panya (17) who
measurements are used to retrieve aerosol volume size distributions (from 0.05 to 15 μm), spectral complex refractive index (m(λ) − ik(λ)) and single scattering albedo (ω(λ)) at low solar elevations (solar zenith angle between 50° and 80°), following a flexible inversion algorithm developed by Dubovik and King  (version 1.0, inversion products). This algorithm uses models of homogeneous spheres and randomly oriented spheroids [Dubovik et al., 2002]. Recently a new version of this inversion algorithm has been developed (Version 2.0), whose most significant modification is the use of a spheroid mixture as a generalized aerosol model (representing spherical, nonspherical, and mixed aerosols) [Dubovik et al., 2006], replacing the spherical and spheroid models used separately up to now. In this vein, Version 2.0 provides a parameterization of the degree of nonsphericity (sphericity parameter), as well as the same set of retrieved aerosol parameters given in Version 1.0. Another important improvement in Version 2.0 is the use of a dynamic, spectrally and spatially resolved satellite- and model-based estimation of the surface albedo, including the bidirectional reflectance distribution function (BRDF), in place of an assumed surface reflectivity [Dubovik et al., 2002]. The Cox-Munk BRDF model over water [Cox and Munk, 1954] was used, which takes into account the wind effect over water using wind speed data from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) database (NOAA Operational Model Archive Distribution System server at NCEP). For land surface covers, the Ross-Li model was adopted
Most network growth models are based on the preferential attachment model . We are interested in a matrix formulation of a general class of preferential attachment. In this paper we present a theoretical framework. We include in-degree and Personalized PageRank , . To our knowledge, the first models of preferential attachment based on PageRank were  and . Both models are based on the usual personalization vector, i.e. v = 1/n. We improve the fundamentals of models that use PageRank by including a general personalization vector in our description.
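As a minimal sketch of the generalization described above (the graph, parameter values and function name are illustrative, not the paper's matrix formulation), the following computes Personalized PageRank by power iteration with an arbitrary personalization vector v, recovering the usual model when v = 1/n:

```python
import numpy as np

def pagerank(A, alpha=0.85, v=None, tol=1e-12, max_iter=1000):
    """Personalized PageRank by power iteration.

    A : adjacency matrix (A[i, j] = 1 for an edge i -> j)
    v : personalization vector; defaults to the uniform v = 1/n of the
        classical model, but any probability vector is allowed.
    """
    n = A.shape[0]
    if v is None:
        v = np.full(n, 1.0 / n)
    out_deg = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump according to v
    P = np.where(out_deg[:, None] > 0,
                 A / np.maximum(out_deg, 1.0)[:, None], v)
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * (x @ P) + (1 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

# Small 4-node example: biasing v toward node 0 raises its score
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
uniform = pagerank(A)                                  # classical v = 1/n
biased = pagerank(A, v=np.array([0.7, 0.1, 0.1, 0.1]))  # general v
print(uniform, biased)
```

In matrix terms, the iteration converges to the solution of x = x(αP + (1 − α) 1 vᵀ), so the personalization vector enters the growth model exactly as an extra rank-one term in the transition matrix.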
The equilibrium results for the two-school case are shown in Table 1 and Figure 2. It is well known from the product differentiation literature that the private equilibrium is not efficient. To decrease the intensity of tuition competition, profit-maximizing schools will differentiate their quality in an excessive way. The main message emerging from our results is that the mixed oligopoly restores efficiency. The equilibrium is the same whether the public school offers the highest or the lowest quality. This conclusion is perfectly in line with the results obtained by Cremer, Marchand and Thisse (1991) in a Hotelling setting with quadratic transportation cost.
PAni-EB has been extensively employed as an anticorrosive additive due to its very high stability and good redox properties, which are able to passivate the metal surface through an anodic protection mechanism. However, this CP is extremely insoluble, forming agglomerates when in contact with solvents. Good miscibility with our paint formulation was achieved by reducing the PAni powder size; the powder was subsequently mixed with chloroform to form a very fine colloidal dispersion, as suggested by B. Wessling in reference . Another important requirement for the use of PAni-EB in paint formulations is that it be added at a very low concentration. We found that 0.3 wt.% is better than 1.0 or 1.5 wt.%: concentrations higher than 0.3 wt.% provoke the formation of polymer agglomerates on the film surface, giving worse adherence and high permeability to the coating (this observation is detailed in section 6.3.2). In contrast, PAni-ES has excellent miscibility with solvent-based alkyd paints due to its good dispersion properties in xylene solutions. We did not observe the formation of agglomerates on the coating surface when PAni-ES at 1.0 wt.% was employed. However, the best miscibility was obtained when we applied the partially oxidized PTE as additive. The PTh derivative synthesized in our laboratory showed a very high compatibility with the alkyd paint formulation, giving very smooth coating surfaces for concentrations ranging from 0.3 to 1.5 wt.%. This high compatibility should be attributed to the fact that PTE is soluble in the paint solvents (i.e. alcoholic solvents and aromatic hydrocarbons).
with values in the category of mixed Hodge diagrams of dga's (see Theorem 5.3.6). Both functors are known to extend to functors defined over all complex algebraic varieties. We provide a proof via the extension criterion of [GN02], which is based on the assumption that the target category is a cohomological descent category. This is essentially a category D together with a saturated class W of weak equivalences and a simple functor s sending every cubical codiagram of D to an object of D, and satisfying certain conditions analogous to those of the total complex of a double complex. The primary example of a cohomological descent structure is given by the category of complexes C^+(A) of an abelian category A, with the class of quasi-isomorphisms and the simple functor s given by the total complex. The choice of certain filtrations originally introduced by Deligne leads to a simple functor s_D for cubical codiagrams of mixed Hodge diagrams, defined level-
To the best of our knowledge, there is no existing literature on the incorporation of CuO nanoparticles into bio-based matrices and, in particular, into PHA matrices. Therefore, the main goal of this work was to develop and characterize the antimicrobial performance and physicochemical properties of PHBV nanocomposites and bilayer films containing CuO nanoparticles. Concretely, melt-mixed nanocomposites of commercial PHBV3 (3 mol% valerate) and PHBV18 (18 mol% valerate) derived from mixed microbial cultures were prepared with different CuO loadings, and the effect of adding an electrospun PHBV18/CuO coating onto compression-molded PHBV3 films on the mechanical, thermal, barrier and biodisintegration properties and, more interestingly, on the antibacterial and antiviral activity against the food-borne pathogens Salmonella enterica, Listeria monocytogenes and Murine Norovirus was studied.
As an example use case of mixed instrumentation, we were able to analyze process-to-process communication, building a bandwidth matrix, and then validate our results against mpiP output from the same runs. Another natural use case appears while analyzing the performance of multi-level parallel applications on a hybrid cluster. When an application is developed using both shared-memory and message-passing programming techniques, we will often desire to take measurements related to both classes of events at the same time; for instance, hardware events interspersed between message-passing function calls.
MultiPARTES  is an FP7 project aimed at developing tools and solutions for building trusted embedded systems with mixed-criticality components on multicore platforms. The approach is based on an innovative open-source multicore platform virtualization layer based on the XtratuM hypervisor . A software development methodology and its associated toolset  are being developed in order to enable trusted real-time embedded systems to be built as partitioned applications, in a timely and cost-effective way.
Equation (18) is similar to that of classical damage formulations ; nevertheless, it should be noted that in our formulation it is a result of isotropy and the Valanis-Landel decomposition. It is noteworthy that the undamaged energy used in most of the models [2, 22, 27, 34] cannot be obtained experimentally, whereas the damaged one W_0^d can be readily determined. In addition, the virgin loading curve cannot be obtained from a hyperelastic curve because it itself involves damage, which is a dissipative process.
Deformable Image Registration (DIR) is a powerful computational method for image analysis, with promising applications in the diagnosis of human disease. Despite being widely used in the medical imaging community, the mathematical and numerical analysis of DIR methods still has many open questions. Further, recent applications of DIR include the quantification of mechanical quantities in addition to the aligning transformation, which justifies the development of novel DIR formulations for which the accuracy and convergence of fields other than the aligning transformation can be studied. In this work we propose and analyze primal, mixed and augmented formulations for the DIR problem, together with finite-element discretization schemes for their numerical solution. The DIR variational problem is equivalent to the linear elasticity problem with a source term that has a nonlinear dependence on the unknown field. Fixed-point arguments and small-data assumptions are employed to derive the well-posedness of both the continuous and discrete schemes for the usual primal and mixed variational formulations, as well as for an augmented version of the latter. In particular, continuous piecewise linear elements for the displacement in the case of the primal method, and Brezzi-Douglas-Marini elements of order 1 (resp. Raviart-Thomas elements of order 0) for the stress together with piecewise constants (resp. continuous piecewise linears) for the displacement when using the mixed approach (resp. its augmented version), constitute feasible choices that guarantee the stability of the associated Galerkin systems. A priori error estimates derived using Strang-type lemmas, and the associated rates of convergence depending on the corresponding approximation properties, are also provided. Numerical convergence tests and DIR examples are included to demonstrate the applicability of the method.
The use of fungi as insecticides has been widely studied, and many commercial products have been developed, most of them based on the fungi Beauveria bassiana and Metarhizium anisopliae (Faria and Wraight, 2007). However, these new products have to compete with chemical insecticides, which in general have a faster effect, are more stable, cheaper and easier to apply, and can be stored for longer periods under variable environmental conditions without loss of effectiveness. A key factor for a biological formulation to be commercially successful is to maintain the viability and virulence of the infective units during storage and application. In general, before application, the product is required to keep its properties for at least a year under varied environmental conditions (Jackson et al., 2010). Exposure to high temperatures during transport and storage is a critical issue. In addition, the moisture content of the conidia and the moisture conditions of the atmosphere during storage are also key factors for maintaining viability (Hong et al., 1997; Blanford et al., 2012). Some authors have observed that the addition of silica gel to oil formulations of M. anisopliae favors conidial viability due to the capacity of this material to adsorb humidity (McClatchie et al., 1994; Moore et al., 1996). Other studies have shown that the use of different desiccant materials enhances the conidial thermotolerance of the entomopathogenic fungus Isaria fumosorosea and correlate this with water potential (Kim et al., 2014a). Thus, it is important for the ingredients in the formulation to have properties that favor the high-temperature resistance of infective units and that can maintain suitable humidity conditions. Several types of mycoinsecticide formulations have been developed, and the most common are technical concentrates in the form of fungus-colonized substrates, followed by wettable pow-
ABSTRACT. Culture media, designed to grow as many different genera of microorganisms as possible, appear to require a nutrient base composed of polypeptides, oligopeptides and amino acids. Most modern culture media are composed of a mixture of different protein hydrolysates obtained from different proteins and enzymes in order to provide the widest spread of peptides. It seems that large peptides have a role in the recovery (or resuscitation) of nutritionally fastidious organisms. The aim of this work was to develop a mixed nutrient base with local raw materials and technology. Physical, chemical and microbiological characterisation of three mixtures of protein hydrolysates was carried out, and the best one was selected to promote the growth of a wide range of microorganisms in different culture media.
Even though there are many software packages for fitting linear mixed models (LMMs) to repeated-measures (RM) and/or longitudinal data, the procedure for performing them is difficult and frequently requires supervision from a statistician. Given this LMM complexity, the current solution for many researchers is to skip them and conduct alternative approaches – with reduced accuracy and sometimes inappropriate – such as RM-ANOVA. Considering that many biomedical researchers work with longitudinal or hierarchical data that should be analyzed using LMMs, our Shiny app is intended to guide researchers with low-to-medium statistical knowledge to understand (via an example) and/or obtain (with their own data) the results of an LMM analysis. The R code of this Shiny app will be accessible to the overall community. This is of relevance considering that: i) researchers will have the possibility to use the Shiny app; ii) researchers and programmers will have the possibility of modifying the code and adapting it to their needs; iii) researchers and programmers will have the possibility to use the core R code design to generate alternative analyses beyond LMMs.