Information and data

Top PDF documents for "information and data":

ANSI/NISO Z39.7: Information services and use: metrics & statistics for libraries and information providers data dictionary

The 2004 revision differed significantly from its predecessors in its approach: it took the path of developing a web-based utility for identifying standard definitions, methods, and practices relevant to library statistics activities in the United States. Like the previous editions of Z39.7, the aim of the standard remains to assist librarians and researchers [now defined as the information community] by indicating and defining useful quantifiable information to measure the resources and performance of libraries, and to provide a body of valid and comparable data on American libraries. The 2004 edition also changed the name from Library Statistics to its current title, Information Services and Use: Metrics and Statistics for Libraries and Information Providers – Data Dictionary.

Resolution X.14 A Framework for Ramsar data and information needs

2. ALSO AWARE of the Ramsar Sites Information Service (RSIS) developed and managed for the Convention by Wetlands International under contractual arrangements with the Ramsar Secretariat to support Contracting Parties in their implementation of wetland conservation and wise use, especially concerning Wetlands of International Importance; and FURTHER AWARE of other tools and resources available from International Organisation Partners and other organisations that contribute to supporting Ramsar data and information needs;

Algorithms and compressed data structures for information retrieval

We note that all the techniques proposed in this thesis are conceived to operate in main memory, mainly due to the random access pattern present in all of them. This fact can, at first, be regarded as a serious restriction, since we want to index huge volumes of data. However, recent hardware developments, such as the availability of 64-bit architectures and the increased use of cluster environments, have led to a scenario where large collections can be entirely addressed in main memory. Accordingly, there has been much recent research on efficient document retrieval in main memory [SC07, CM07], where indexes are stored completely in main memory. Therefore, the cost of random reads, which is one of the greatest bottlenecks of traditional information systems, is minimized. Hence, it is important to focus on another key aspect of the efficiency of information retrieval systems: the huge volume of data. The goal is then to process less data, that is, to read fewer bytes. In this scenario, compact data structures become very important, and they were the main objective of this thesis.
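
To make the "read fewer bytes" idea concrete, here is a minimal sketch (not one of the structures developed in the thesis) of the simplest compact data structure: a fixed-width bit-packed array. Values that fit in b bits are stored in exactly b bits each, so far less memory is touched per access than with a plain word-aligned array; the class name and interface below are illustrative only.

```python
# Minimal sketch of fixed-width bit packing: n values that fit in `bits` bits
# are stored in ceil(n * bits / 8) bytes instead of one machine word each.

class PackedArray:
    def __init__(self, values, bits):
        self.bits = bits
        self.n = len(values)
        self.buf = bytearray((self.n * bits + 7) // 8)
        for i, v in enumerate(values):
            assert 0 <= v < (1 << bits), "value does not fit in the given width"
            self._write(i, v)

    def _write(self, i, v):
        pos = i * self.bits                      # absolute bit offset of value i
        for k in range(self.bits):               # set bits one by one (clarity over speed)
            if (v >> k) & 1:
                byte, off = divmod(pos + k, 8)
                self.buf[byte] |= 1 << off

    def __getitem__(self, i):
        pos = i * self.bits
        v = 0
        for k in range(self.bits):               # read back the b bits of value i
            byte, off = divmod(pos + k, 8)
            v |= ((self.buf[byte] >> off) & 1) << k
        return v


if __name__ == "__main__":
    data = [3, 7, 0, 5, 6, 1]
    packed = PackedArray(data, bits=3)           # 3 bits per value
    assert [packed[i] for i in range(len(data))] == data
    print(len(packed.buf), "bytes for", len(data), "values")   # 3 bytes for 6 values
```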

Hierarchical information representation and efficient classification of gene expression microarray data

The data cover endpoints A and C to I of [112], available at http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE16716. A detailed explanation of the endpoint composition is included in Table 4.1. These data have been chosen because they are highly reliable, selected after a quality control process in order to provide a common test ground, and because for each endpoint both a training set and an independent validation set are provided [112]. Furthermore, many different laboratories have tested their algorithms on the same datasets with the same evaluation protocol (i.e. train the classifiers on the training set and assess performance on the validation dataset) and published their final outcomes [112, 100, 83], so an accurate benchmark can be performed to understand how well a proposed algorithm performs with respect to a large number of state-of-the-art alternatives. Results are compared in terms of the Matthews Correlation Coefficient (MCC) [89] since, as stated in [112], it is informative when the distribution of the two classes is highly skewed, it is simple to calculate, and it is available for all models with which the proposed method has been compared. It is defined by:
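
The excerpt is cut off just before the formula itself. For reference, the standard definition of the Matthews Correlation Coefficient for a binary confusion matrix with entries TP, TN, FP and FN is:

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}
                    {\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}
```

MCC ranges from -1 to +1, with +1 indicating perfect prediction, 0 a prediction no better than random, and -1 total disagreement between prediction and observation.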

Removing exogenous information using pedigree data

In each generation several variables were calculated to evaluate the efficiency of the different strategies: (1) non-exogenous founder representation, calculated from genealogy, (2) […]

The purpose of this article is to analyze, through bibliographic research with a qualitative and quantitative approach, the contribution of the historical documentation of academic libraries to the strategic management of information. A bibliographic review was carried out on the databases Library and Information Science Abstracts, Information Science & Technology Abstracts, Scopus, Web of Science, Base de dados Referenciais em Ciência da Informação of the Universidade Federal do Paraná, and Biblioteca Digital Brasileira de Teses e Dissertações of the Instituto Brasileiro de Informação em Ciência e Tecnologia. The search strategy required the simultaneous occurrence of the terms “memory” or “history”, “academic library” and “information management” as descriptors or keywords. Sixteen articles were retrieved and categorized according to their scope. Of these, none highlighted the role of the historical documentation of academic libraries in the strategic management of information.

Are the MCVL tax data useful? Ideas for mining

The main objective of this study was to present the possibilities of in-depth mining of the information contained in the CSWL “tax file” and the personal details of Social Security contributors for the period 2004-2009. Using data from the tax module has advantages and disadvantages. Its advantages over other statistical sources are the following. First, a basic aspect is the availability of data on income for individuals that can be linked across several waves (longitudinal data) with personal information (personal files) and work information (contribution files), with regard to different job categories according to their types of income: salaried workers, pensioners, the self-employed and recipients of unemployment benefits. This income information is not available in the LFS (although it has provided wage distribution data expressed in deciles based on Form 190 since 2010) and, although the Personal Income Tax Filers Panel contains tax data, it does not contain detailed labour variables.
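
As a rough illustration of the kind of linkage described above, and not the actual MCVL/CSWL file layout (the person identifier, wave and column names below are invented), income records from a tax file can be joined with personal and contribution files per person and wave using ordinary data-frame merges:

```python
# Hypothetical sketch of linking a "tax file" with personal and contribution
# (job) files by person and wave; column names are illustrative only.
import pandas as pd

tax = pd.DataFrame({
    "person_id": [1, 1, 2],
    "wave":      [2008, 2009, 2009],
    "income":    [21000, 22500, 15800],
})
personal = pd.DataFrame({
    "person_id":  [1, 2],
    "birth_year": [1975, 1983],
    "sex":        ["F", "M"],
})
jobs = pd.DataFrame({
    "person_id": [1, 1, 2],
    "wave":      [2008, 2009, 2009],
    "category":  ["salaried", "salaried", "self-employed"],
})

# Longitudinal panel: one row per person and wave, carrying income,
# personal details and job category together.
panel = (tax
         .merge(personal, on="person_id", how="left")
         .merge(jobs, on=["person_id", "wave"], how="left"))
print(panel)
```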

Chapter I Overview of Statistics ppt

This course prepares students to obtain data and transform them into information by describing, synthesizing, analyzing, and interpreting them using tables, graphs and summary statistics; to analyze educational and psychology data; to examine the relationships between variables; and to use the statistical tools necessary to perform data analysis and make decisions under conditions of uncertainty, taking estimation errors into account when making generalizations.

A GPS analysis for urban freight distribution

This permitted the categorization and analysis of urban freight mobility based on qualitative (vehicle observation and interviews) and quantitative (GPS data) information. […]

Interaction design of an application for IoT data of workplaces

6. Walking through the diagram: after completing the groups in the affinity diagram, the team talked about all the concepts discovered and drew conclusions from the data gathered in the interviews. This part of the session was audio recorded, as valuable information was mentioned about the decisions for the next steps in the product redesign process. For further reference, the resulting diagram was added to the annex (section 8.2). Each participant from the interviews was assigned a colour for their cards, and the legend is visible in the affinity diagram. In addition, as can be observed, some groups were easier to identify based on the cards created: “Dashboard”, “Menu”, “Profile”, etc. However, some interesting key points from the interviews were more difficult to categorize, and these were labelled as “Independent Ideas”. In addition, some cards could belong to more than one group, and the solution was to place them at the borders of those groups (e.g. the cards between the “Measures” and “Raw Data” labels). Lastly, it is important to mention that labels with the same colour were initially bigger groups that needed to be divided into smaller ones. This colour code was chosen to show that the groups are related and present similar concepts.

Relationship between shoulder pain and weight of shoulder bags in young women

ABSTRACT The present study aimed to assess the relationship between shoulder pain and weight of shoulder bags in young women. Cross-sectional study conducted with 316 women aged 18-35 years from February 2013 to July 2014. A questionnaire was used to collect demographic data and information on physical activity, sleeping habits, presence of pain and its characteristics, use of bags, and percentage of bag weight–body weight ratio (%bagweight). Pearson’s chi-squared test and Mann-Whitney test were used to check for associations between the dependent variable (presence of pain) and the independent variables, with a significance level of 5% (p<0.05). In all, 195 (61.7%) women complained of shoulder pain. These women carried heavier shoulder bags (p=0.01), weighing circa 4.02% of their body weight (p=0.050), and the pain was proportional to a higher bag weight (p=0.023) compared to the painless group. Lack of physical activity and inadequate sleep position influenced the occurrence of shoulder pain (p=0.008 and p=0.017, respectively). The weight of the shoulder bag represented a risk factor for the onset of shoulder pain and women should not carry bags weighing more than 4% of their body weight.
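
Purely as an illustrative sketch of the two tests named in the abstract, and with made-up numbers rather than the study's data, the association between a categorical factor and the presence of pain and the comparison of %bagweight between groups could be set up as follows:

```python
# Illustrative only (invented numbers, not the study's data): the two tests
# named in the abstract, applied at the 5% significance level.
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi-squared: association between a categorical factor (e.g. physical
# activity yes/no) and presence of shoulder pain.
table = [[80, 40],    # active:   pain / no pain
         [115, 81]]   # inactive: pain / no pain
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Mann-Whitney: compare %bagweight between the pain and no-pain groups.
bagweight_pain    = [4.1, 4.5, 3.9, 5.0, 4.2]
bagweight_no_pain = [3.2, 3.8, 3.5, 3.0, 3.6]
u_stat, p_mw = mannwhitneyu(bagweight_pain, bagweight_no_pain)

alpha = 0.05
print(f"chi-squared p = {p_chi2:.3f}, Mann-Whitney p = {p_mw:.3f}, alpha = {alpha}")
```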

An approach to the measurement of intangible assets in dot com

PCA is a standard technique in multivariate statistical analysis. It is a data reduction technique. When many variables are associated with a particular entity, such as a dot com company, it is suspected that some of them will be measuring the same characteristic of the company. Several variables may, in fact, be indicators of a characteristic of the company that cannot be measured directly. How many independent characteristics are necessary to describe a company, which variables are associated with each characteristic, and to what extent a particular variable contributes to explaining that characteristic are all inferred from the solution of the PCA exercise. For an introduction to PCA see Chatfield and Collins (1980). There is much in common between MDS and PCA but, in this particular case, MDS has a crucial advantage over PCA: PCA plots a company only if full information is available for it, while MDS is robust to missing data. Thus, if the maps had been created with PCA, they would have contained only 35 points, since the value of at least one variable was missing for 5 firms: ALOY, AMEN, CNET, EGGS, and TVLY. Three of the 34 ratios could not be calculated for these five firms: V11, V20, and V29 in the case of ALOY, AMEN, CNET, and EGGS; V9, V18, and V27 for TVLY. This loss of information would have required deleting the firms from the data set if other techniques had been used. In our case MDS made it possible to keep these firms in the analysis.
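
A minimal sketch of the missing-data point, using invented numbers rather than the paper's 34 ratios: pairwise dissimilarities can be computed over whichever variables two companies both report, and MDS then places every firm on the map from the precomputed dissimilarity matrix, whereas a standard PCA would need complete rows.

```python
# Illustrative data, not the paper's ratios: firms with missing values still
# get coordinates, because distances are taken over the shared variables only.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))          # 8 hypothetical firms, 5 ratios
X[2, 1] = np.nan                     # two firms with a missing ratio
X[5, 3] = np.nan

n = X.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        diff = X[i] - X[j]                        # nan wherever either firm is missing
        D[i, j] = np.sqrt(np.nanmean(diff ** 2))  # distance over shared variables only

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords.shape)                  # (8, 2): every firm gets a point on the map
```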

An Integrated Data Model and Web Protocol for Arbitrarily Structured Information

• (D2, S1): These models feature regular and flexible schema structure for data items with values in the textual domain. Examples of these models are easy to find in modern digital library systems. Digital library services typically deal with digital objects (i.e. at the conceptual level) that feature streams of information, whether binary, as in document files, or textual, such as document paragraphs (Gonçalves et al., 2004). Although digital library services are built on top of the document collection model characteristic of information retrieval, structure regularity is introduced in terms of one or more metadata schemes associated with digital objects. This has led to an ongoing debate within the digital library community surrounding the efficacy of metadata searching vs. full-text searching (Mischo, 2005). A model for integrating retrieval of structured and unstructured queries, such as the RELTEX calculus, could provide a common ground for both metadata-based searching (whether over the structured or the textual domain) and full-text searching (whether of whole documents or over particular document fields). Another example of a model with a rigid-flexible structure and data items over the textual domain is found in the model underlying Solr. Solr is also a search application built on top of the document collection model. Solr supports index-specific schema definitions provided in XML files. Solr's XML schema files detail which fields can be contained in documents, along with the particulars of the indexing and querying functions associated with document fields. In Solr, fields are always defined upfront (thus introducing schema regularity) but can be specified to be optional in documents. In addition, Solr features “dynamic fields” by relating a pattern in (runtime-defined) field names to indexing and querying functions. These dynamic fields can be considered equivalent to RELTEX's extension fields, yet while RELTEX relates a single set of indexing and matching functions to all extension fields in the same table, Solr supports several indexing and matching functions within an index through a naming convention (e.g. all dynamic fields ending in “_type_A” use the functions If_I.A and Mf_I.A, while […])
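
As a small client-side sketch of that naming convention (not taken from the thesis): assuming a running Solr core at a placeholder URL whose schema defines the common "*_txt" (text) and "*_i" (integer) dynamic-field patterns, as Solr's default configset does, documents can be indexed and queried through fields that were never declared individually. The pysolr client and the field names below are just one possible setup.

```python
# Sketch of Solr dynamic fields from a client (assumes a running core at the
# placeholder URL whose schema defines "*_txt" and "*_i" dynamic-field
# patterns; the document fields are invented for illustration).
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/mycore")

# No "title_txt" or "year_i" field is declared upfront: the suffix alone
# tells Solr which indexing and querying functions to apply, much like the
# extension fields discussed above.
solr.add([
    {"id": "doc-1", "title_txt": "An integrated data model", "year_i": 2010},
    {"id": "doc-2", "title_txt": "Arbitrarily structured information", "year_i": 2012},
])
solr.commit()

results = solr.search("title_txt:data")   # full-text match on the dynamic text field
for hit in results:
    print(hit["id"])
```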

Ramsar COP10 DR 15 Draft Resolution X.15 Describing the ecological character of wetlands, and data needs and formats for core inventory: harmonized scientific and technical guidance

19. These analyses revealed a number of issues that have been taken into account in the development of the ecological character description field structure provided in Section 4 below. One of these is that some of these schemes did not include a field for recording information on wetland type(s) present (in terms of the Ramsar classification of wetland type), which has been added as an ecological character description field. Similarly, the “pressures, vulnerabilities and trends” field (in the Resolution VIII.6 core inventory fields) has been added in the ecological processes section of the description. In general, however, the content and structure of the ecological character description below has been kept as close as possible to the various existing inventory and ecological character schemes.

NWC SAF/High Resolution Winds (HRW) as stand alone AMV calculation software

All elements for reading and processing all needed data (including satellite data, NWP model data, and cloud information for the AMV height assignment using the NWC SAF/Cloud Type and Cloud Top Temperature and Height products), for running all parts of the algorithm, and for defining the AMV output in several formats (BUFR, HDF5 or McIDAS MD files) are included in the SAFNWC/MSG software package or at the NWC SAF website. The user therefore does not need any additional elements to calculate and use the AMVs provided by the HRW product.

AERONET Web Data Access and Relational Database

This presentation describes the design, capability, and delivery of data from the AERONET web site. In early 2004, the web site was redesigned to improve readability and content in addition to meeting NASA web site protocols. The AERONET data display interface provides aerosol optical thickness (AOT) and water vapor data including site information and related added-value products (e.g., MODIS imagery, back trajectory analyses). The download tool provides access to several AERONET products including various levels of AOT and retrievals of atmospheric optical properties. The web site also provides climatology maps and data tables of Level 2.0 AOT and the 440-870 nm Ångström parameter. The presentation further describes how the new AERONET relational database has been used in operational activities and provides examples that show how this database has been integrated into the existing web site. The AERONET relational database will also provide a mechanism to track instrument information from remote calibration facilities.

D-SEISMIC: A very flexible low-cost hardware-software system for acquisition, real-time and post-processing of seismic data of the Ross Sea (Antarctica 2002 expedition)

[…] and are thus sufficiently different from one another that they can be distinguished. Bearing this in mind, some experiments were carried out in 1975 by F. Giordano [1], using an analog processor with a time-varying gain system and an echo-signal amplitude-to-time converter together with a marine echograph. The signal reflected by the sea bed was processed by the analog processor and transmitted to the stylus of the echograph, and the length of the representative bar corresponded to the sea bed typology. Modern digital technologies [3], together with sophisticated and fast data processing, therefore make it possible to extract more information from the reflected sea bed signals.

PPT Chapter 01(1)

• Umbrella term: the capture, retrieval, storage, presentation, sharing, and use of biomedical information, data, and knowledge for providing care, solving problems, and making decisions. […]

Desarrollo de técnicas de aprendizaje automático y computación evolutiva multiobjetivo para la inferencia de redes de asociación entre vías biológicas [Development of machine learning and multi-objective evolutionary computation techniques for inferring association networks between biological pathways]

The main contribution presented in this thesis is a method named PET (Crosstalk Pathway Inference by using Gene Expression Data Biclustering and Topological Information), which […]

Chapter I Descriptive Statistics 2015 doc

This chapter prepares students to obtain data and transform them into information by describing, synthesizing, analyzing, and interpreting them using tables, graphs and summary statistics; to analyze business data, examining the relationships between variables and making economic forecasts; and to use the statistical tools necessary to perform data analysis and make decisions under conditions of uncertainty, taking estimation errors into account when making generalizations.