This belongs to a family of algorithms known as Probabilistic Model Building Genetic Algorithms (PMBGA) [8], which are characterized by identifying the attributes that contribute significantly to the construction of an optimal individual. The validation index for determining the performance of an individual is the “fitness” function, which in turn depends on the problem to be solved. The implementation considers an individual to have the best performance when the value of this function is minimized. Because this work deals with a classification problem, besides using the RMSE we decided to also consider the classification error and the correlation. With that in mind, we can assemble an initial draft of a fitness function (4).
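
As an illustration only (the actual Eq. (4) is not reproduced here), a minimal sketch of such a composite fitness, with hypothetical equal weights over RMSE, classification error, and (1 − correlation), could look like:

```python
import math

def fitness(y_true, y_pred, w=(1.0, 1.0, 1.0)):
    """Composite fitness (lower is better): weighted sum of RMSE,
    classification error, and (1 - Pearson correlation).
    The weights w are illustrative, not the paper's Eq. (4)."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    err = sum(round(p) != t for t, p in zip(y_true, y_pred)) / n
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    corr = cov / (st * sp) if st and sp else 0.0
    return w[0] * rmse + w[1] * err + w[2] * (1.0 - corr)
```

A perfect classifier yields fitness 0 under this sketch; any error in any of the three terms increases the value to be minimized.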


To detect the specific physical characteristics of the aircraft, special flight maneuvers are performed. These maneuvers are divided into longitudinal and lateral motion and are performed separately to acquire the required information. Nowadays engineers consider new, optimized ways to obtain model information in order to reduce flight test campaign time and costs. To achieve this, it is possible to design maneuvers in which flight control surfaces are deflected simultaneously; control commands are generated by simultaneous excitation of the control surfaces. Minimizing the Relative Peak Factor (RPF) is preferred because it reduces the control surface deflections by minimizing the energy input. Multisine excitations can be used for this purpose. This method is based on small perturbations that keep the aircraft as close as possible to the equilibrium state while preserving the accuracy of the estimated model. In order to find signals with these characteristics, Schroeder showed that a phase-shifted sum of sinusoids can be used as an input signal with a low peak factor and good frequency content [6]. A specific power spectrum must be obtained to accomplish this aim. This task can be carried out using an experience-based engineering approach or, more efficiently, by using the Lichota method for maximizing the information extracted from the aircraft response. The Lichota method is a bio-inspired method based on the Genetic Algorithm (GA) optimization technique that obtains an adequate input energy content for a specific aircraft model [7, 8, 9, 10, 11, 12].
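
A minimal sketch (not the authors' implementation) of a Schroeder-phased multisine and the peak-factor metric it is designed to keep low; the flat-spectrum phase formula and the RPF definition used here are the standard textbook ones and are assumptions:

```python
import math

def schroeder_multisine(freqs, T, n_samples):
    """Multisine with Schroeder phases (flat power spectrum assumed).
    freqs: harmonic frequencies in Hz; T: record length in seconds."""
    N = len(freqs)
    phases = [-math.pi * k * (k + 1) / N for k in range(N)]
    dt = T / n_samples
    return [sum(math.cos(2 * math.pi * f * i * dt + p)
                for f, p in zip(freqs, phases)) / N
            for i in range(n_samples)]

def relative_peak_factor(u):
    """RPF = (max - min) / (2 * sqrt(2) * rms); equals 1 for a pure sinusoid."""
    rms = math.sqrt(sum(x * x for x in u) / len(u))
    return (max(u) - min(u)) / (2 * math.sqrt(2) * rms)
```

With ten harmonics, the phase-shifted sum has a noticeably lower RPF than the same harmonics summed in phase, which is the property exploited when designing low-energy excitation signals.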


timization problem. Since our problem involves the L1-norm, it corresponds to a non-differentiable minimization problem. We therefore propose a local Huber regularization technique to deal with the non-differentiable term. Further, we propose to solve the optimization problem by using a multigrid optimization (MG/OPT) algorithm. This algorithm was introduced as an efficient tool for large-scale optimization problems (see [25, 23]). In [23] a multigrid optimization method is also presented for the optimization of systems governed by differential equations. The MG/OPT method focuses on optimization problems that are discretized at different levels of discretization, generating a family of subproblems of different sizes. The idea of the algorithm is to take advantage of the solutions of problems discretized at coarse levels to optimize problems on fine meshes. The efficient resolution of coarse problems provides a way to calculate search directions for fine problems. Our purpose in this work is to propose, implement and analyze the MG/OPT algorithm for the resolution of nonsmooth problems with a finite element scheme. As the name implies, the application of the MG/OPT method involves an underlying optimization algorithm at each level of discretization. Due to the limited regularity of the functional J and the p-Laplacian involved therein, we propose a class of descent algorithms, such as the gradient method and a preconditioned descent algorithm (see [12]), as the underlying optimization algorithms. In particular, the preconditioned descent algorithm was proposed to solve variational inequalities involving the p-Laplacian. Hence, our aim is to take advantage of the computational efficiency of the multigrid scheme and combine it with a suitable optimization algorithm for problems of type (1.2).
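
On the scalar level, the standard Huber smoothing of |x| looks as follows; this sketch does not reproduce the paper's functional, and γ is an illustrative smoothing parameter:

```python
def huber(x, gamma=1e-2):
    """Huber regularization of |x|: quadratic near 0 (hence differentiable),
    linear away from 0; approaches |x| as gamma -> 0."""
    ax = abs(x)
    return ax - gamma / 2 if ax >= gamma else x * x / (2 * gamma)

def huber_grad(x, gamma=1e-2):
    """Derivative: x/gamma inside the quadratic zone, sign(x) outside,
    so the gradient is continuous everywhere."""
    if abs(x) >= gamma:
        return 1.0 if x > 0 else -1.0
    return x / gamma
```

The two branches agree in value and slope at |x| = γ, which is what makes the regularized term C¹ and therefore amenable to the descent algorithms discussed above.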


descriptors and possible functional forms of the QSAR equations, so GFA appears to be the appropriate choice for this problem. In addition, GFA ends up with an equation, therefore allowing the interpretation of the model in physical terms. GFA is also able to discriminate between good and bad descriptors and to select the former. These two properties are not available in other optimization algorithms, such as neural networks; when used for similar problems, they are normally supplemented with a genetic algorithm to search for and select descriptors, the neural network being used only for regression purposes.


Abstract. In this paper, we solve some benchmarks of the Set Covering Problem and the Equality Constrained Set Covering, or Set Partitioning, Problem. The resolution techniques used to solve them were Ant Colony Optimization algorithms and hybridizations of Ant Colony Optimization with Constraint Programming techniques based on Arc Consistency. The concept of arc consistency plays an essential role in constraint satisfaction as a problem simplification operation and as a tree pruning technique during search, through the detection of local inconsistencies involving the uninstantiated variables. In the proposed hybrid algorithms, we explore the addition of this mechanism to the construction phase of the ants so that they generate only feasible partial solutions. Computational results are presented showing the advantages of adding this kind of mechanism to Ant Colony Optimization in strongly constrained problems where pure ant algorithms are not successful.
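
The construction phase can be sketched as follows; note that this uses a much simpler feasibility filter (candidates that cover nothing new are pruned) in place of the paper's full arc-consistency mechanism, and all names and data shapes are illustrative:

```python
import random

def construct_solution(columns, n_rows, pheromone, rng=random.Random(0)):
    """One ant builds a set cover: repeatedly pick a column with probability
    proportional to pheromone * (number of newly covered rows), keeping only
    candidates that still cover something uncovered, so every partial
    solution remains useful (a stand-in for the arc-consistency filter)."""
    uncovered = set(range(n_rows))
    solution = []
    while uncovered:
        cands = [(j, pheromone[j] * len(set(col) & uncovered))
                 for j, col in enumerate(columns) if set(col) & uncovered]
        total = sum(w for _, w in cands)
        r, acc = rng.random() * total, 0.0
        for j, w in cands:  # roulette-wheel selection over filtered candidates
            acc += w
            if acc >= r:
                solution.append(j)
                uncovered -= set(columns[j])
                break
    return solution
```

Because the candidate list is filtered each step, the ant can never add a redundant column, mirroring (in a simplified form) how the hybrid algorithms restrict ants to feasible partial solutions.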


Fitness landscape analysis [3] methods can be used for estimating an optimization problem’s hardness. As we see in Figures 2 and 3, the fitness landscapes of the problem instances Ex1 and Ex2 used here are very rugged, which makes it very hard for optimization algorithms to find optimal solutions.
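
One common ruggedness estimate is the lag-1 autocorrelation of fitness values sampled along a random walk; a minimal sketch (this is not the analysis pipeline behind Figures 2 and 3):

```python
def autocorrelation(fitnesses, lag=1):
    """Lag-k autocorrelation of fitness values along a random walk:
    values near 1 indicate a smooth landscape, values near 0 (or negative)
    a rugged one."""
    n = len(fitnesses)
    mean = sum(fitnesses) / n
    var = sum((f - mean) ** 2 for f in fitnesses) / n
    cov = sum((fitnesses[i] - mean) * (fitnesses[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var if var else 0.0
```

A smoothly increasing fitness series gives a value near 1, while a series that alternates at every step gives a strongly negative value, matching the intuition that neighboring solutions on a rugged landscape carry little information about each other.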


Nowadays, different optimization algorithms are based on behavior observed in nature. These algorithms, referred to as bio-inspired, have been proposed as an alternative to traditional optimization methods. The foraging process performed by living beings is an example of the behaviors that can serve as references for the development of optimization algorithms [8]. Notably, good techniques for finding food give an organism survival advantages, which is precisely what an optimization algorithm seeks [8].


The Financial Project Scheduling Problem (FPSP), also known as the Capital Constrained Problem, is an optimization problem considered of crucial importance to support financial decisions under the assumption of certainty. The problem defines a number of activities that require scheduling. It also allows certain interdependences to occur among these activities in the form of precedence constraints. The goal is to identify a schedule which optimizes a suitable objective function. Usually, there are three main variants to consider: (i) project scheduling with time-dependent costs; (ii) project scheduling with constrained resources; and (iii) project payment scheduling with constrained capital. In the first variant, the minimization of the total expected cost over time is established as the objective function. In the second variant, capital or resources are additionally required to process an activity; as resources are limited, additional restrictions emerge when scheduling the activities. In this context, a machine scheduling model seems to be a suitable choice. Indeed, the objective function aspires to determine a schedule that minimizes the makespan (the time until the last activity is completed). Finally, the last variant’s objective consists of assigning modes for activities and payments so that the net present value (NPV) is maximized under the capital availability constraint. In a review by Icmeli et al. [1993], three related project scheduling problems are described: the project scheduling problem with constrained resources, the time and cost trade-off problem, and the payment scheduling problem. The use of metaheuristic algorithms allows the inclusion of multifactor levels for a number of project characteristics [Smith-Daniels et al. 1996]. Hartmann and Kolisch [2000] and Kolisch and Hartmann [2006] offer surveys on heuristic and metaheuristic applications to the exploration of the positive and negative cash flows that investment projects generate.
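
For the second variant, a minimal sketch of the earliest-start (critical-path) makespan computation under precedence constraints, ignoring resource limits; the names and data shapes are illustrative:

```python
def makespan(durations, preds):
    """Earliest-start schedule under precedence constraints:
    start[j] = max finish time over j's predecessors.
    durations: dict activity -> processing time;
    preds: dict activity -> list of predecessor activities."""
    finish = {}

    def f(j):
        if j not in finish:
            start = max((f(p) for p in preds.get(j, [])), default=0)
            finish[j] = start + durations[j]
        return finish[j]

    return max(f(j) for j in durations)
```

Resource limits would add further constraints on which activities may run concurrently; this sketch gives only the precedence-feasible lower bound on the makespan.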


multi-point crossover. If all that happens is a combination of the parents’ genes, then the system never looks outside the parents’ population for better solutions. To enable dramatic changes in the population of diffusers, mutation is also needed. This is a random procedure whereby there is a small probability of any gene in the child sequence being randomly changed, rather than coming directly from the parents. Selecting diffusers to die off can be done randomly, with the least fit (the poorest diffusers) being the most likely to be discarded. Through these principles of selection, mutation and crossover, the fitness of successive populations should improve during the optimization process. This continues until the population becomes sufficiently fit that the best shape produced can be classified as optimum.
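
The mutation step described above, sketched for a binary gene string (the per-gene probability and the allele alphabet are illustrative values, not those of the diffuser study):

```python
import random

def mutate(genes, rate=0.01, alphabet=(0, 1), rng=random.Random(42)):
    """Per-gene mutation: each gene is replaced with a random allele
    with small probability `rate`, letting the search escape the
    parents' gene pool."""
    return [rng.choice(alphabet) if rng.random() < rate else g
            for g in genes]
```

With rate 0 the child is an exact recombination of its parents; any positive rate occasionally injects genetic material that neither parent carried.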


Due to their implicit parallel search, evolutionary algorithms (EAs) are well suited to dealing with the JSSP [7], [14], [24] as well as to seeking solutions in multiobjective optimization [3], [4], [5], [6], [10], [15]. The present work investigates the ability of the CPS-MCMP method, a co-operative population search approach allowing multiple crossovers applied to multiple (2 or more) parents, to find non-dominated points, and contrasts its performance against other recombination schemes when building the Pareto front.


proportional to that structure's performance relative to the rest of the population. To simulate selection, the phenotypes are scored according to a set of fitness criteria. When the system is being used to solve an optimization problem, the traits are interpreted as solution parameters and the individuals are scored according to the function being optimized. This score is then used to cull the population in a way that gives higher-scoring individuals a greater chance of survival. The aptitude of an individual is closely related to the value of the function at the point represented by the individual. After the selection step, the surviving gene pool is used to produce the next generation by a process analogous to mating. Mating pairs are selected by random mating from the entire population, by some form of inbred mating, or by assortative mating in which individuals with similar traits are more likely to mate. The pairs are used to produce genetic material for the next generation by a process analogous to sexual reproduction. In a simple GA, the whole population is replaced by a new set of individuals each generation. The new set of individuals is produced in pairs. In order to produce two new individuals, a pair of parents is selected from the current population; individuals with better aptitude have a greater chance of being selected. Once a pair of individuals is selected, crossover and mutation are applied. The crossover consists of constructing a pair of new individuals by taking parts of the genetic material from both parents. The expected effect is the combination of the characteristics present in both parents. In the simplest case, the genetic material of an individual consists of a string, and the crossover consists of randomly choosing a point at which both parents are simultaneously divided and then joining the first part of the first parent with the second part of the second parent.
The second individual can be constructed with the remaining parts of the parents’ genetic material.
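
The single-point crossover just described can be sketched as code; the second child is built from the complementary halves:

```python
import random

def single_point_crossover(p1, p2, rng=random.Random(0)):
    """Single-point crossover: cut both parents at the same random
    point and swap the tails, producing two complementary children."""
    point = rng.randrange(1, len(p1))  # cut strictly inside the string
    return p1[:point] + p2[point:], p2[:point] + p1[point:]
```

Together the two children contain exactly the genetic material of the two parents, redistributed around the crossover point.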


The scope of this Thesis focuses on a particular class of disasters with an equally concerning increase in severity and frequency in the last few years: wildfire events, understood as those large-sized fires not voluntarily initiated by human beings. Despite the variety of initiatives, procedures and methods aimed at minimizing the impact and consequences of wildfires, several fatalities in recent years have called into question the effectiveness of current policies for the allocation of firefighting resources such as aircraft, vehicles, radio communication equipment, supply logistics and fire brigades. A clear, close exponent of this deficiency is the death of eleven firefighters in a 130 km² forest wildfire in Guadalajara (Spain) in 2005, which was officially attributed to a proven lack of coordination between the command center and the firefighting crew on site, ultimately resulting in radio isolation among the deployed teams. The reason for this missed coordination in the management of firefighting resources can be debated by authorities and involved stakeholders, but it undoubtedly calls for the study and development of algorithmic tools that help operations commanders optimally perform their coordination duties. Unfortunately, the economic crisis striking mostly countries in Southern Europe has significantly reduced national budgetary lines for wildfire prevention and suppression to the benefit of deficit-reduction programs. As a consequence of these budget cuts, cost aspects have lately emerged as necessary, relevant criteria in operations planning: from an optimization perspective, firefighting resources are allocated so as to achieve the maximum effectiveness against wildfires, subject to the available budget upper bounding the overall economic cost associated with the decisions taken by commanders and decision makers. Although the cost constraints in this problem are obvious and well reasoned, in practice management procedures for firefighting resources do not follow cost-aware strategies, but are instead driven by the limited capacity of the human being to make decisions dynamically in complex, heterogeneous scenarios.


In the last 50 years, many methods have been developed to solve combinatorial optimization problems. The simplex method is used to optimize linear functions; random searches, dynamic programming, and branch and bound methods, among others, are used to solve nonlinear problems. But these techniques are not enough to solve problems belonging to the NP-complete class and of growing complexity. To mitigate this weakness, metaheuristics are used. A metaheuristic is a method with a high abstraction level that can analyze large search spaces, keeping an equilibrium between diversification and intensification of the search. Moreover, it provides very good results, although those solutions may not be optimal.
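
A minimal sketch of one classic metaheuristic, simulated annealing, which illustrates the diversification/intensification balance: worse moves are accepted with a temperature-dependent probability that shrinks as the search intensifies (all parameter values are illustrative):

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995,
                        iters=5000, rng=random.Random(1)):
    """Minimize f starting from x0. At high temperature t the search
    diversifies (worse moves accepted with prob. exp(-delta/t)); as t
    cools, acceptance becomes greedy and the search intensifies."""
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = f(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

As the excerpt notes, the returned solution is usually very good but carries no optimality guarantee.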


The remainder of this section establishes the history of and an introduction to SAN networks, and the benefits of this technology. Section 2 explains some key concepts and problems of Fibre Channel networks. Section 3 surveys previous work applying ACO to networking problems. Section 4 describes the basis for the proposed algorithms and the algorithm itself. The final sections cover acknowledgments, future work and conclusions.


In general, these search techniques have a high computational cost because state spaces grow factorially or exponentially. Oftentimes it is impossible to perform a thorough analysis of the solution space, so algorithms that use heuristics are required to assess the cost of the states and to process first the nodes that are most likely to yield the best results [3], [4].
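
A minimal sketch of such a heuristic-guided (greedy best-first) search, expanding first the node whose heuristic estimate is lowest rather than exhausting the whole state space; the names are illustrative:

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: a priority queue ordered by the
    heuristic h pops the most promising node first."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []           # reconstruct the path back to start
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for n in neighbors(node):
            if n not in came_from:
                came_from[n] = node
                heapq.heappush(frontier, (h(n), n))
    return None  # goal unreachable
```

The heuristic prunes nothing by itself, but it orders the exploration so that good solutions tend to be found long before the state space is exhausted.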


After computing a finite number of iterations using any optimization algorithm A, it will only be possible to discard a finite number of points (a Lebesgue null set) where there is no global minimum. However, the information available, understood as the measure of the set where the global minimum can be found, is the same as at the start of the execution of the algorithm. Therefore, there is no algorithm A which ensures convergence to the global optimum; in fact, a given algorithm does not converge to the optimal solution with probability 1. This example puts us in a specific situation: if the aim is to work with algorithms that have certain convergence properties, the objective functions must be continuous. In this regard, Torn and Zilinskas (1989) show that an algorithm converges for any problem P, where the objective function is continuous and Ω is an arbitrary compact set, if and only if the set of solutions that the algorithm evaluates is dense. That is, for all y ∈ Ω and any positive number δ > 0, there is always an iteration t at which the algorithm evaluates a certain point x₀ sufficiently close to y, which can be expressed as ‖y − x₀‖ < δ. In other words, the algorithm must necessarily explore the whole search space in order to ensure that it converges to the global optimum.
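
Pure random search on a compact box is the canonical example of an algorithm satisfying this density condition: its evaluation points are dense in Ω with probability 1. A minimal sketch (the iteration budget is an illustrative choice):

```python
import random

def pure_random_search(f, bounds, iters=20000, rng=random.Random(0)):
    """Pure random search on a box. The sampled points are dense in
    the domain with probability 1, which by the density condition
    above guarantees convergence for continuous objectives."""
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

The price of this guarantee is exactly what the excerpt states: the algorithm must, in the limit, explore the whole search space.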


With the growth of airspace congestion, there is an emerging need to implement these types of tools to assist human operators in handling the expanding traffic loads and improving flow efficiency. Unfortunately, CDR has proven to be a hard problem to solve. To give some idea, the way to represent the actual trajectory of an aircraft is by means of a dynamic model that has to take into account, for example, the following relationships: the speed of the aircraft depends on the wind direction and the altitude at which it flies (the higher an aircraft flies, the thinner the air around it, and thus it needs to go faster to maintain its position); acceleration depends on speed (e.g., at lower speeds, a plane can reach higher acceleration ratios) and altitude, and so on. Notice that the aircraft loses mass throughout the flight as fuel burns, and this influences the speed and acceleration of the aircraft (and, vice versa, the speed influences fuel consumption and thus the mass loss), etc. Good introductions to flight dynamics modelling can be found in [101, 133, 225]. Finally, CDR has to deal with the simultaneous trajectories of (possibly) many aircraft. Moreover, we must bear in mind that, given the intended trajectories captured in the flight plans, some uncertainty regarding the actual trajectories of the aircraft is unavoidable, which makes CDR harder to solve. Trying to address all these issues within a mathematical optimization model would lead today to an unmanageable problem (in terms of computing effort, i.e., elapsed time and memory requirements).


The behavior of the algorithms is tested using four dynamic functions (OneMax, Royal-Road, P-Peaks, and MMDP) built with the XOR-DOP benchmark generator [10], thus addressing different difficulties: epistasis, multimodality, and deception. We use binary strings of 100 bits, separated into 25 contiguous building blocks (BBs) of 4 bits for the Royal-Road problem, P = 50 peaks for the P-Peaks problem instance, and an MMDP with k = 16 deceptive subproblems. OneMax and Royal-Road are unimodal DOPs (a single optimum), while P-Peaks and MMDP are multimodal DOPs (multiple suboptimal solutions). For each problem instance, we also test distinct change modes (cyclic, cyclic with noise, and random) and change severities (ρ ∈ {0.05, 0.1, 0.2, 0.5, 1.0}). The higher the ρ value, the more severe the change; ρ = 1.0 means a random severity in the range [0.01, 0.99]. For all DOP instances the change frequency is τ = 10 generations, the cycle length is 5 changes, and the noise adds a severity of 0.05.
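
A minimal sketch of the XOR-DOP idea [10]: the environment changes by XOR-ing every candidate with a mask, and the fraction of mask bits flipped at each change is the severity ρ. This sketch covers only the random change mode; the function names are illustrative:

```python
import random

def xor_dop(base_fitness, n_bits, rho, rng=random.Random(0)):
    """Wrap a static fitness into a dynamic one. Each call to change()
    flips int(rho * n_bits) bits of the mask; fitness(x) evaluates the
    base function on x XOR mask, so the optimum moves without altering
    the landscape's structure."""
    mask = [0] * n_bits

    def change():
        for i in rng.sample(range(n_bits), int(rho * n_bits)):
            mask[i] ^= 1

    def fitness(x):
        return base_fitness([b ^ m for b, m in zip(x, mask)])

    return fitness, change
```

For example, wrapping OneMax (the bit sum) and applying one change with ρ = 0.5 drops the fitness of the all-ones string from n to n/2, since exactly half of its bits are now "wrong" under the new mask.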


To meet the requirements of control algorithms for energy optimization, the control system has been implemented in the Linux operating system, which can extend the battery lifetime of multimedia mobile devices according to the user requirements while maintaining a reasonable QoE. This mechanism includes an OS-level estimator that works as the feedback of the control system. A PAPI-based estimator, the first approach to estimating power consumption, is used for comparison with the OS-level one in order to accurately assess the estimation values. In this chapter, the experimental results, including the validation and evaluation of the two estimators, the controllers' implementation, the battery lifetime extension and the disturbance test, are given in four parts. First, the accuracy of the PAPI-based and OS-level estimators is compared and their features for the control system are stated; the overhead of the OS-level estimator is given to show its real performance. Second, different classic controllers are implemented in both the system simulator and the real system, and their behaviors are compared in order to verify their correctness. Third, the potential battery life extension achieved by PCG is shown, and one example of a power budget profile is listed in order to compare its features with those of the original Linux governors. Finally, the effect of power consumption variations is tested in both simulation and implementation.


The experiments carried out apply the optimization of the architecture of the neural network proposed in [13] and the fine-tuning of the parameter configuration of the trained model openly provided by the authors in [1]. Furthermore, the proposed ensemble methodology is applied, as well as the genetic algorithm. The structure of this section is as follows. First of all, Subsection 3.1 lists the software and hardware that have been used. Then, the tested image dataset is specified in Subsection 3.2. After that, the results obtained from the parameter configuration optimization process are described in Subsection 3.3. Finally, Subsection 3.4 presents the ensemble process results.
