Industrial engineers who have obtained higher-level degrees have played, and continue to play, a decisive role as transmitters and introducers of progress. The balanced combination of a solid scientific and technical education, of different applied technologies, and of disciplines from the economic-business and social-humanistic areas, together with an understanding of the reality of the industrial sector (from a broad and global perspective) and the ability to interrelate the various disciplines involved in creating, developing and managing complex systems, makes these studies a current and innovative model, one applied by many European universities. The formation of a generalist industrial engineer with knowledge of the different techniques and the ability to integrate the industrial thinking of Europe cannot be achieved in a university period of less than five years. The best education is the current model: an integral cycle in which the basic knowledge and "a good foundation" are acquired from the beginning, so that the specific techniques can later be applied in depth. Such a learning process, by requiring an understanding of the overall picture, ensures that no technological innovation goes unnoticed (Romero, 2003).
Cabot, J., Clariso, R., Guerra, E., & de Lara, J. (2008). An invariant-based method for the analysis of declarative model-to-model transformations. In Proceedings of the 11th International Conference on Model Driven Engineering Languages and Systems (MoDELS 2008), 28th September – 3rd October 2008, Toulouse, France. Lecture Notes in Computer Science, 5301, 37–52.
In this section we present the main features of a component-based architecture for the development of adaptive web-based education systems that support such a meta-model. The proposed architecture (cf. Figure 1) is composed of three main blocks following a typical three-tier scheme: a Web Engine, an EML Execution Engine and a Web Client. In addition, there is a Setup and Administration function devoted to the management of the educational resources and the instantiation of the elements.
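The interaction between the three blocks can be sketched as follows. This is a minimal illustrative sketch only, not the architecture's actual implementation: the class names, the dictionary-based unit of learning, and the `next_activity`/`handle_request` methods are assumptions introduced here to show how the EML Execution Engine drives learner state while the Web Engine renders activities for the Web Client.

```python
class EMLExecutionEngine:
    """Interprets a unit of learning described in an EML and tracks each
    learner's progress. Names and data shapes are illustrative."""

    def __init__(self, unit_of_learning):
        self.unit = unit_of_learning
        self.progress = {}  # learner_id -> index of next activity

    def next_activity(self, learner_id):
        # Return the learner's next activity and advance their state,
        # or None when the unit of learning is finished.
        idx = self.progress.get(learner_id, 0)
        activities = self.unit["activities"]
        if idx >= len(activities):
            return None
        self.progress[learner_id] = idx + 1
        return activities[idx]


class WebEngine:
    """Middle tier: mediates between the Web Client and the EML engine,
    rendering each activity as a web resource."""

    def __init__(self, engine):
        self.engine = engine

    def handle_request(self, learner_id):
        activity = self.engine.next_activity(learner_id)
        if activity is None:
            return "<p>Course completed</p>"
        return f"<p>{activity}</p>"


unit = {"activities": ["Read introduction", "Take quiz 1"]}
web = WebEngine(EMLExecutionEngine(unit))
page = web.handle_request("alice")  # first activity rendered for this learner
```

The Web Client would simply display the returned markup; the Setup and Administration function would correspond to code that creates and populates the `unit` structure.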
The next chapter discusses the different model types available and the transformations between them. Chapter 3 details the analysis of the system properties that can be inferred from their models (gain, stability, structure). Chapter 4 describes in more detail the objectives of control and the alternatives in solving the associated problems briefly outlined in this introductory chapter (closed-loop properties, feedforward control, etc.). Chapter 5 presents the methodologies for controlling MIMO plants based on SISO ideas by setting multiple control loops, decoupling and creating hierarchies of cascade control. Chapter 6 describes some centralised control strategies, where all control signals and sensors are managed as a whole by means of matrix operations. Pole-placement state feedback and observers are the main results there. Chapter 7 deals with controller synthesis by means of optimisation techniques. The linear quadratic Gaussian (LQG) framework and an introduction to linear fractional transformation (LFT) norm-optimisation (mixed sensitivity) are covered. Chapter 8 discusses how to guarantee a certain tolerance to modelling errors in the resulting designs. It deals with the robustness issue from an intuitive framework and presents the basics of robust stability and robust performance analysis. Mixed sensitivity is introduced as a methodology for controller synthesis. Lastly, Chapter 9 deals with additional issues regarding implementation, non-linearity cancellation and supervision.
The current phase of the history of O-O is characterized by a shift of emphasis from programming to analysis and design, and by an awareness of the problem of open systems and of the need for standards. There is an important trend toward the incorporation of object-oriented methods into database management systems, into existing structured management methods, and into the CASE tools that support database systems and structured methods. The proposal of this work consists in the specification of a process and behavior model of systems for a CASE tool that, based on analysis by scenarios and on the classification of a system's objects into application objects and interface objects, synthesizes the
Abstract. The ASSERT project defined new software engineering methods and tools for the development of critical embedded real-time systems in the space domain. The ASSERT model-driven engineering process was one of the achievements of the project and is based on the concept of property-preserving model transformations. The key element of this process is that non-functional properties of the software system must be preserved during model transformations. Property preservation is carried out through model transformations compliant with the Ravenscar Profile, which provides a formal basis for the process. In this way, the so-called Ravenscar Computational Model is central to the whole ASSERT process. This paper describes the work done in the HWSWCO study, whose main objective has been to address the integration of the Hardware/Software co-design phase in the ASSERT process. In order to do that, non-functional properties of the software system must also be preserved during hardware synthesis.
After that, we decided to change the modelling environment to gain more control over the programming process and the possibility of using a variety of toolboxes with the model. MATLAB was chosen because it is a high-level scientific and engineering computational tool; many specialized toolboxes can be incorporated; a custom graphical user interface can easily be built so that non-technical users can use it; it provides versatility and integration with other tools (such as CUDA, optimization algorithms, the neural network toolbox, etc.); and it has traditionally been used in the process and systems engineering field. This modelling environment offers many possibilities for modelling agents. We explored the different options for representing agents and finally selected the vectorial programming paradigm, which is presented in Section 7.3.
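The vectorial paradigm mentioned above can be illustrated with a short NumPy analogue (the thesis itself uses MATLAB; this Python sketch, with illustrative state variables `position`, `velocity` and `active`, only shows the idea): all agents' state is held in arrays, one row per agent, and the whole population is updated in a single vectorized operation instead of looping over agent objects.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents = 1000

# One row per agent: the whole population's state lives in a few arrays.
position = rng.uniform(0.0, 100.0, size=(n_agents, 2))
velocity = rng.normal(0.0, 1.0, size=(n_agents, 2))
active = rng.random(n_agents) < 0.8  # boolean mask selecting active agents


def step(position, velocity, active, dt=1.0):
    """Advance every active agent at once with a masked, vectorized update."""
    position = position.copy()
    position[active] += velocity[active] * dt  # inactive agents stay put
    return position


position = step(position, velocity, active)
```

The same pattern extends naturally to state-dependent rules: each rule becomes a boolean mask plus a vectorized assignment, which is also the form that makes GPU back-ends such as CUDA straightforward to exploit.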
Abstract—Social engineering is an attack aimed at manipulating victims into divulging sensitive information or taking actions that help the adversary bypass the secure perimeter around information-related resources, so that the attacker's goals can be achieved. Although there are many security tools, such as firewalls and intrusion detection systems, that protect machines from attack, a widely accepted mechanism for protecting people from such deception is lacking. Yet the human element is often the weakest link of an information security chain, especially in a human-centered environment. In this paper, we show that human psychological weaknesses result in the main vulnerabilities that can be exploited by social engineering attacks. We also capture two essential levels, the internal characteristics of human nature and external circumstantial influences, to explore the root cause of human weaknesses, and we show that the internal characteristics of human nature can be converted into weaknesses by external circumstantial influences. We therefore propose the I-E based model of human weakness for social engineering investigation. Based on this model, we analyze the vulnerabilities exploited by different social engineering techniques, and we summarize several defense approaches for addressing the human weaknesses. This work can help security researchers gain insights into social engineering from a different perspective and, in particular, enhance current and future research on social engineering defense mechanisms.
Abstract. Intelligent computing systems comprising microprocessor cores, memory and reconfigurable user-programmable logic represent a promising technology which is well suited to applications such as digital signal and image processing, cryptography and encryption, etc. These applications frequently employ recursive algorithms, which are particularly appropriate when the underlying problem is defined in recursive terms and it is difficult to reformulate it as an iterative procedure. It is known, however, that the hardware description languages (such as VHDL) and system-level specification languages (such as Handel-C) usually employed for specifying the required functionality of reconfigurable systems do not provide direct support for recursion. In this paper a method allowing recursive algorithms to be easily described in Handel-C and implemented in an FPGA (field-programmable gate array) is proposed. The recursive search algorithm for the knapsack problem is considered as an example.
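The core difficulty the abstract describes, expressing a recursive search in a language without recursion, can be illustrated in Python (the paper works in Handel-C; this sketch only shows the standard transformation, not the paper's actual method): the recursive knapsack search is rewritten with an explicit stack of frames, which is the kind of form that can then be mapped onto hardware.

```python
def knapsack_recursive(weights, values, capacity, i=0):
    # Plain recursive search: at each item, either skip it or take it.
    if i == len(weights) or capacity == 0:
        return 0
    best = knapsack_recursive(weights, values, capacity, i + 1)  # skip item i
    if weights[i] <= capacity:  # take item i if it fits
        take = values[i] + knapsack_recursive(
            weights, values, capacity - weights[i], i + 1)
        best = max(best, take)
    return best


def knapsack_stack(weights, values, capacity):
    """Same search with an explicit stack of (item index, remaining capacity,
    accumulated value) frames: no recursion, so it is expressible in languages
    that lack direct support for it."""
    best = 0
    stack = [(0, capacity, 0)]
    while stack:
        i, cap, val = stack.pop()
        if i == len(weights):
            best = max(best, val)
            continue
        stack.append((i + 1, cap, val))  # branch: skip item i
        if weights[i] <= cap:            # branch: take item i
            stack.append((i + 1, cap - weights[i], val + values[i]))
    return best
```

In a hardware realization the stack would become a fixed-size memory with a stack-pointer register, and the `while` loop a state machine, so the maximum recursion depth must be bounded at design time.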
Another ill-defined issue related to competence-based design is the set of choices that must be made for each competence. For instance, the expected level of student proficiency (domain levels) must be defined, and progressive development along the degree must be scheduled using appropriate learning methodologies and assessment tools. An additional issue must be considered in the particular case of GC: their usually imprecise description. The competence "teamwork" is a good example. When one says "students must be able to work in groups", one is referring to a set of skills (e.g. conflict resolution, leadership, oral expression) that may not be well defined for all curriculum design actors (professors, students, employers). European [7-9] and non-European [10-11] authors have addressed these issues. All of them agree that, to improve the design, broad competences must be split into simpler ones, and all of them must be related to LO.
Ivanović, A., America, P., & Snijders, C. (2012). Modeling customer-centric value of system architecture investments. Software & Systems Modeling. doi:10.1007/s10270-012-0235-2
Kaiser, M., & Royse, G. (2011). Selling the Investment to Pay Down Technical Debt: The Code Christmas Tree. 2011 AGILE Conference (pp. 175–180). IEEE. doi:10.1109/AGILE.2011.50
Kruchten, P. (2010). Software architecture and agile software development. Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - ICSE '10 (Vol. 2, p. 497). New York, NY, USA: ACM Press. doi:10.1145/1810295.1810448
The KMAI System embraces the whole cycle of strategic information production, from the collecting process to retrieval by the user. Part of the system's visualization is shown in Figure 1. The cycle begins with the selection of the digital sources to be monitored (Knowledge Engineering), separating structured from non-structured data (about 90%) and submitting them to differentiated treatments. Data are obtained through Collecting Agents connected to collections, each one representing a source of information, which can range from specific websites to document storage directories (textual documents, spreadsheets, e-mails and reports in general) existing digitally in the organization.
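The separation step described above can be sketched as follows. This is a hedged illustration only: the `Document` type, the `route` function and the sample sources are invented here to show the idea of routing structured records and free text to differentiated treatments, not the KMAI System's actual code.

```python
from dataclasses import dataclass
from typing import Any, List, Tuple


@dataclass
class Document:
    source: str   # the collection this Collecting Agent monitors
    content: Any  # dict for structured records, str for free text


def route(documents: List[Document]) -> Tuple[List[Document], List[Document]]:
    """Split incoming documents so structured and non-structured data
    can be submitted to differentiated treatments."""
    structured, unstructured = [], []
    for doc in documents:
        (structured if isinstance(doc.content, dict) else unstructured).append(doc)
    return structured, unstructured


docs = [
    Document("erp_export", {"id": 1, "total": 9.5}),       # structured record
    Document("email_inbox", "Meeting moved to Friday."),   # free text
    Document("reports_dir", "Quarterly summary text ..."), # free text
]
structured, unstructured = route(docs)
```

With mostly textual sources, the unstructured branch dominates, consistent with the roughly 90% share mentioned in the text.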
This is a position paper in which we analyse the role that ontologies can play in this transition. Anticipating the final conclusion of our analysis, we believe that, to achieve the potential improvement offered by using a rigorous engineering process to build CSs, we need a deeper, unified cognitive science (a solid theory of mental processes) that could sustain such an endeavour. Engineering methods based on this science of the mind will lead to the synthesis of the two classes of engineering assets that are necessary for CS engineering (CSEng): design patterns (structural/behavioural aspects for cognitive architectures) and ontologies (concepts to bind i) the minds of the engineers and system stakeholders; ii) the mind of the engineer to the CS under construction; and iii) the mind of the CS to its world and the world of its user).
tem spectral efficiency. However, for analytical convenience, the fading channel coefficients corresponding to the original and the retransmitted packets were assumed to be independent and identically distributed random variables and, furthermore, queueing effects on the average packet delay were not taken into account. In (Liu et al., 2005a), the same authors also proposed a cross-layer design combining finite buffer queueing at the DLC layer with AMC at the PHY layer and applied finite-state Markov chain analysis to derive analytical expressions for the packet loss rate and throughput. This paper, however, did not take into account the possible performance improvement from the ARQ protocol (i.e., unsuccessfully transmitted packets were dropped) and ignored correlation in the traffic arrival process. Wang et al. (2007) tried to address some of these flaws by generalizing the cross-layer combining of queueing with AMC in (Liu et al., 2005a) and the cross-layer combining of queueing with truncated ARQ and AMC in (Liu et al., 2004). However, similarly to (Liu et al., 2004) and (Liu et al., 2005a), they assumed a memoryless packet arrival process. Moreover, in order to facilitate mathematical tractability of the queueing process, they relied on the rather unrealistic assumption of a time-slotted system where only one frame was transmitted per slot, with each frame at the PHY layer containing at most one packet from the DLC layer. As mentioned in Section 3.1, traffic burstiness was considered by Le et al. (2006b), who provided an analytical framework for point-to-point wireless systems with infinite/finite buffer queueing and ARQ-based error control at the DLC layer, and AMC at the PHY layer. Nevertheless, infinitely persistent "pure" ARQ-based error control schemes were considered, while the generalization to more sophisticated truncated-HARQ protocols was not addressed. Kang et al. (2009) considered a joint design approach where IR-based HARQ was combined with an AMC design at the PHY layer, although the queueing process was not considered at all.
The studies proposing the adaptation of the FRAM through combination with model checking are the most numerous among the research carried out to enhance the FRAM. Five publications were found in this regard (Duan et al., 2015; Tian et al., 2016; Yang et al., 2017; Zheng et al., 2016; Zheng and Tian, 2015). All of them use model checking to examine all the paths through which the functions can be coupled, leading to functional resonance and to deviations or failures in the performance of the system. After analysing each of them, it can be concluded that, in general, the main steps followed are similar. The differences appear in the process followed to translate into a mathematical model the functions defined qualitatively by the FRAM, in the mechanism used to establish the safety constraints, and in the type of model checking used. However, all of them show that the combination of model checking and the FRAM facilitates the practical application of the FRAM, since the paths of propagation of variability are examined automatically.
combined activity of several microorganisms and higher plants, which colonize five interconnected compartments. The main contribution of this thesis is in the engineering of the photosynthetic compartments of the MELiSSA loop. These photosynthetic compartments consist of a continuous photobioreactor for the culture of Arthrospira sp. and a number of sealed higher plant chambers. The first Arthrospira reactor has already been built and is in operation at the MPP. This work contributes to increasing the knowledge of its operation and characterization. While the Higher Plant Chamber (HPC) is still in the construction phase, the work of this thesis focuses on the collection of basic data for the culture of beet and lettuce. These data are then used in the design of the HPC prototype, which will be built and integrated within the MPP. Finally, the work evaluates the impact of the integration of these two compartments in the complete system, using a static mass balance model to assess the nitrogen, CO2 and O2 balances.
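The static mass balance idea can be sketched in a few lines. The flow figures below are purely hypothetical placeholders, not MELiSSA data, and the `static_balance` helper is an assumption introduced here; the sketch only shows the bookkeeping: for each tracked species, the residual is inflow minus outflow, and a zero residual means the compartment is balanced for that species.

```python
def static_balance(inflows, outflows):
    """Static mass balance for one compartment: for each species,
    residual = inflow - outflow (e.g. in kg/day). A negative residual
    indicates net production by the compartment."""
    species = set(inflows) | set(outflows)
    return {s: inflows.get(s, 0.0) - outflows.get(s, 0.0) for s in species}


# Hypothetical daily flows for a photosynthetic compartment (illustrative only):
residual = static_balance(
    inflows={"CO2": 1.20, "N": 0.05},
    outflows={"CO2": 0.10, "N": 0.05, "O2": 0.85},
)
```

Chaining such per-compartment balances, with one compartment's outputs feeding the next compartment's inputs, gives the loop-level nitrogen, CO2 and O2 assessment the thesis refers to.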
• Optical analysis. This attack is widely used for reverse-engineering microchips and, therefore, it can be used for studying the internal hardware of an RFID tag. Before performing such an analysis, the external enclosure has to be removed, which involves using acid and, less frequently, a laser beam. Then, an optical or electron microscope can be used to analyze the hardware. An excellent example of optical analysis is described in , where the authors detail how they reverse-engineered the security of MIFARE Classic cards (the authors first performed an optical analysis, and then a communications protocol analysis). To avoid reverse engineering through optical analysis, the designers of RFID tags can embed non-functional logic to misguide the attackers, re-position the internal hardware to make the analysis more difficult, or implement certain key functionality in software instead of hardware.