aiming to combine qualitative and quantitative analysis to improve the integrity and credibility of scientific evaluation.

The preliminary research will continue improving the hierarchy system by interviewing researchers from outside the LIS area, government stakeholders, and, if necessary, the public. The follow-up research will focus on quantitative measurement. Our current plan for decomposing the Level 3 analysis is to keep asking for opinions on breaking down each requirement, one by one, into quantitative measures for which we have data access or acquirable data. Roughly, we will organize another round of interviews with questions such as "How can we determine the number of other disciplines that have discussed the research theory?", "Can we translate the language properties of an article into measurable indicators?", and "How can we obtain the number of articles published in different approaches?". We will gather all the actionable metrics or approaches. Last but not least, we will present the evaluation results in an out-of-the-box way. Instead of assigning weights to each indicator and producing yet another ranking or index, we will illustrate the results in the context of the article, so that readers can form a clear picture of its value and impact.

Acknowledgements

This research was supported by the Soft Science Key Project of Shanghai Science and Technology Innovation Action Plan 2020 (Grant No. 20692192500) and the open fund of ISTIC-CLARIVATE Joint Laboratory for Scientometrics.


GeoAcademy: algorithm and platform for the automatic detection and location of geographic coordinates in scientific articles

Jesús Cascón-Katchadourian1, Francisco Carranza-García2, Carlos Rodríguez-Domínguez3 and Daniel Torres-Salinas4

1cascon@ugr.es

University of Granada, Faculty of Communication and Documentation, Dept. of Information and Communication, 18071 Granada (Spain)

2carranzafr@ugr.es

University of Granada, ETSIIT, Dept. of Software Engineering, 18014 Granada (Spain)

3carlosrodriguez@ugr.es

University of Granada, ETSIIT, Dept. of Software Engineering, 18014 Granada (Spain)

4torressalinas@ugr.es

University of Granada, Faculty of Communication and Documentation, Dept. of Information and Communication, 18071 Granada (Spain)

Abstract

This document describes the GeoAcademy project, whose main objective is to automatically geolocate scientific articles downloaded from general scientific databases such as Scopus or WoS. The geolocation is carried out on the content of the document, either by capturing any geographical coordinates the document contains or by detecting toponyms that appear in it, through an algorithm created for this purpose. In the methodology we explain the steps taken in this project to create a sample database of articles dealing with Sierra Nevada (Spain) and to create and design the algorithm. The results report the technical data from applying the algorithm to the database and its success rate, as well as a description of the platform created to graphically display the geolocated documents on a web map. Finally, in the discussion, we describe the difficulties encountered, the possible bibliometric applications, and the tool's usefulness for viewing and retrieving information.

Introduction

Geolocation is the ability to locate information in specific geographic spaces in order to treat and process it (Ramos Vacca & Bucheli Guerrero, 2015). Technological advances in restitution by aerial photogrammetry and in digitized cartography have contributed to this (Cortés-José, 2001), as have new procedures for cartographic writing and map printing, the appearance around 2005 of Google Maps, Bing Maps and OpenStreetMap, and numerous programs for creating maps such as OpenLayers, Leaflet, CartoDB or MapTiler (Cascón-Katchadourian & Ruiz Rodríguez, 2016).

In the field of scientific evaluation and bibliometrics, geolocation techniques have commonly been used to locate scientific articles based on the origin of their authors (Catini et al., 2015).

However, few studies have been devoted to analyzing a set of scientific publications based on their geographical coordinates and geolocating them on a map.

We would like to highlight two current initiatives, both because of their importance and because several institutions stand behind their prototypes. One of them is GEOUP4; this "web portal shows on an interactive map the geolocation of the academic items of the repositories of the UPC, UPCT, UPM and UPV polytechnic universities grouped in the UP4 Association" (http://geo.up4.is/). The other project is JournalMap (https://www.journalmap.org/), a cooperative project between the USDA-ARS Jornada Experimental Range in Las Cruces, NM and the Idaho Chapter of The Nature Conservancy, and one of the components of another tool called Landscape Toolbox. It geolocates scientific production based on the geographical location where each study was carried out, in order to observe which areas have been studied and where there are gaps. The present paper links with these two projects. Our study is original and useful since it aims to geolocate scientific articles automatically, through algorithms created for this purpose, rather than manually or semi-automatically.

More specifically, there are three main objectives for this project. The first (1) is to develop an algorithm that extracts the coordinates from a collection of documents in order to identify exactly which places these studies deal with. The second (2) is that, once the locations have been identified by the algorithm, they are displayed on a map through an online platform. The third (3) is for the platform to integrate a layer of bibliometric and/or altmetric information that allows one to know the volume of production at each set of coordinates as well as different data on scientific and social impact. It should be mentioned that this paper presents only the results of objectives 1 and 2, applied to a set of scientific works that study the Sierra Nevada mountain range (Granada, Spain).

Material and methods

In order to create a collection of documentary information about Sierra Nevada, a search was carried out in the Scopus database. Scopus was chosen for the facilities it offers for exporting full-text documents, thanks to the Scopus Document Download Manager. With this search we wanted to find scientific articles dealing with the Sierra Nevada mountain range located in the province of Granada (Spain). As there are other mountain ranges with the same name in other parts of the world, the terms Spain or Granada were added to the search.

This search returned 623 articles. The second step was processing the information for storage. After the pertinent manual verification work (the geolocation of the documents is automatic, not the selection of the sample), 447 documents had a complete associated PDF (not only the abstract or title) and were openly accessible; the other 176 (623 - 447) records did not have an associated PDF, were incomplete, or were not open publications. Of these 447 documents, 424 are about the Spanish Sierra Nevada rather than the Sierra Nevada of California or Peru.

The third step was the identification of the coordinates. Although there are many types of coordinate systems, the most common ones found in the sample are the following types and subtypes: the geographic coordinate system and the UTM (Universal Transverse Mercator) coordinate system. The geographic coordinate system is subdivided into sexagesimal (degrees, minutes and seconds) and decimal (degrees with decimals). UTM coordinates appear in multiple variants, since authors express them in different ways, with or without the letters of the cardinal points and with or without specifying the zone (in our case, zone 30S). In addition, environmental studies often use a subdivision of zone 30S employed on military maps, which is reflected in the articles as VG, VF, WG and WF. Finally, our database contains 424 bibliographic references linked to their corresponding PDFs. This database has been used for the design and training of the algorithm.
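As a worked example of the conversion between these subtypes, a hypothetical sexagesimal latitude such as 37° 05' 41'' N converts to decimal degrees as

$$37 + \frac{5}{60} + \frac{41}{3600} \approx 37.0947^{\circ}\,\mathrm{N},$$

with the same arithmetic applying to longitude, and with southern latitudes and western longitudes taking negative signs.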

For the geolocation of the contributions, we have developed a text-mining algorithm based on the extraction of information through regular expressions and toponym keywords (see Figure 1). To carry out this automatic geolocation process, we designed an algorithm based on four stages (a simplified illustrative sketch of these stages follows the list):

1. Preprocessing: In this stage, the contributions of the sample, in PDF format, are converted to a processable format (plain text), normalized and stripped of unnecessary information. For the conversion of PDF files to plain text we used pdftotext, an open-source command-line utility.

2. Extraction: Search for and extraction of geographical references in the text using two methods: (i) regular expressions to find geographical coordinates in their different formats (decimal, sexagesimal and UTM), and (ii) a search for the frequency of appearance of place names.

The toponym database for this study was built by combining public, georeferenced information sources of the Junta de Andalucía, such as the NGA and ITACA of the Andalusian Institute of Statistics and Cartography.

3. Transformation: In order to georeference the results on an interactive map, all results are processed and unified in decimal format (latitude, longitude). To convert between the different geographic representations we used Proj4js, a library for transforming point coordinates from one coordinate system to another, including datum transformations.

4. Analysis: Finally, once all the extracted information is in the same format, a small analysis is carried out to decide, through heuristics, whether the contribution can be geolocated. We mainly follow three criteria: (i) if both geographic coordinates and place names have been found, we combine this information and select the coordinate that is textually closest to the most frequent place name; (ii) if only geographic coordinates have been identified, the article is geolocated within the most referenced area; and (iii) if only toponyms have been recognized, we geolocate the article at the most frequent toponym, as long as its frequency exceeds a certain threshold (calculated as the simple mean of toponym frequencies across the entire sample of this study).
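The following minimal Python sketch illustrates stages 1 to 3. It is our own reconstruction, not the project's actual code: the function names are hypothetical, the regular expressions are deliberately simplified versions of the patterns described above, and pyproj is used as a Python stand-in for the Proj4js library mentioned in stage 3.

```python
# Illustrative reconstruction of stages 1-3 (not the authors' actual code).
import re
import subprocess

from pyproj import Transformer


def pdf_to_text(pdf_path: str) -> str:
    """Stage 1: convert a PDF to plain text with the pdftotext command-line utility."""
    result = subprocess.run(["pdftotext", pdf_path, "-"],
                            capture_output=True, text=True, check=True)
    return result.stdout


# Stage 2: deliberately simplified regular expressions for the formats found in the sample.
DECIMAL_RE = re.compile(
    r"(-?\d{1,2}\.\d+)\s*°?\s*([NS])?[,;\s]+(-?\d{1,3}\.\d+)\s*°?\s*([EW])?")
SEXAGESIMAL_RE = re.compile(
    r"(\d{1,2})[°º]\s*(\d{1,2})['′]\s*(\d{1,2}(?:\.\d+)?)?[\"″]?\s*([NS])\W{1,5}"
    r"(\d{1,3})[°º]\s*(\d{1,2})['′]\s*(\d{1,2}(?:\.\d+)?)?[\"″]?\s*([EW])")
UTM_RE = re.compile(r"30\s*S\s+(\d{6})\s+(\d{7})")  # e.g. "30S 447500 4105000"


def sexagesimal_to_decimal(degrees: str, minutes: str, seconds: str, hemisphere: str) -> float:
    value = float(degrees) + float(minutes) / 60 + float(seconds or 0) / 3600
    return -value if hemisphere in ("S", "W") else value


# Stage 3: unify everything as decimal (latitude, longitude). pyproj stands in here for
# Proj4js; EPSG:32630 is UTM zone 30 north (WGS84), the zone that covers the 30S
# latitude-band designation used for Sierra Nevada.
UTM30_TO_WGS84 = Transformer.from_crs("EPSG:32630", "EPSG:4326", always_xy=True)


def extract_coordinates(text: str) -> list[tuple[float, float]]:
    """Return all coordinates found in the text as (latitude, longitude) pairs."""
    coords = [(float(lat), float(lon)) for lat, _, lon, _ in DECIMAL_RE.findall(text)]
    for groups in SEXAGESIMAL_RE.findall(text):
        coords.append((sexagesimal_to_decimal(*groups[0:4]),
                       sexagesimal_to_decimal(*groups[4:8])))
    for easting, northing in UTM_RE.findall(text):
        lon, lat = UTM30_TO_WGS84.transform(float(easting), float(northing))
        coords.append((lat, lon))
    return coords
```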

Figure 1. Algorithm workflow.
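The decision stage (stage 4 in Figure 1) can be sketched in the same illustrative spirit. Again, the data model and names are our own assumptions, and the centroid used for criterion (ii) is a stand-in, since the paper only describes "the most referenced area" qualitatively.

```python
# Illustrative reconstruction of the stage-4 decision heuristic (not the authors' code).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Toponym:
    name: str
    lat: float
    lon: float
    frequency: int       # occurrences of the place name in the article
    text_position: int   # character offset of a representative occurrence


def geolocate(coords_with_pos: list[tuple[float, float, int]],
              toponyms: list[Toponym],
              sample_mean_frequency: float) -> Optional[tuple[float, float]]:
    """Return a (lat, lon) for the article, or None if it cannot be geolocated."""
    if coords_with_pos and toponyms:
        # (i) pick the coordinate textually closest to the most frequent place name
        top = max(toponyms, key=lambda t: t.frequency)
        lat, lon, _ = min(coords_with_pos, key=lambda c: abs(c[2] - top.text_position))
        return (lat, lon)
    if coords_with_pos:
        # (ii) only coordinates: approximate "the most referenced area" by the centroid
        # of all extracted coordinates (an assumption of this sketch)
        lats = [c[0] for c in coords_with_pos]
        lons = [c[1] for c in coords_with_pos]
        return (sum(lats) / len(lats), sum(lons) / len(lons))
    if toponyms:
        # (iii) only toponyms: most frequent one, if it exceeds the sample-mean threshold
        top = max(toponyms, key=lambda t: t.frequency)
        if top.frequency > sample_mean_frequency:
            return (top.lat, top.lon)
    return None
```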

Results

Table 1 shows a summary of the results of the automatic geolocation algorithm (Figure 1) applied to the collection of 424 scientific articles on Sierra Nevada. In total, the algorithm was able to geolocate 157 articles using coordinates, 37% of the total. One of the tools used to increase the number of geolocated articles and elements has been the use of place names, which allowed us to geolocate 5025 place names in 189 different articles, an average of about 26.6 place names per geolocated article. In all, 346 works were geolocated by the algorithm, that is, 81.6% of the original document collection was located either by coordinates or by toponyms. It should be mentioned that the use of place names has been one of the fundamental factors in improving the success rate of the algorithm.


Table 1. Indicators and general statistics in the algorithm training process

A) General indicators related to geolocation by coordinates
A.1. Number of articles analysed in the study: 424
A.2. Number of articles containing geographical coordinates (157 + 43 (images) + 5 (UTM)): 205
A.3. Number of articles containing geographical coordinates identified by the algorithm: 157
A.4. Percentage of articles geolocated through geographical coordinates (A.3/A.1): 37.03%

B) Indicators related to geolocation by place names
B.1. Number of place names identified by the algorithm: 5025
B.2. Number of articles geolocated by place names: 189
B.3. Percentage of articles geolocated by place names (B.2/A.1): 44.57%

C) Findings
C.1. Number of geolocations achieved (coordinates + place names) (A.3 + B.2): 346
C.2. Percentage of articles geolocated out of the total number of articles (C.1/A.1): 81.6%

Once objective 1 had been achieved, the focus moved to the creation of an application to view scientific articles on a digital map based on the different coordinates they contain. The portal that has been developed currently contains the collection of documents on Sierra Nevada analysed in this paper. It is operational, in beta, at the following web address: https://geoacademy.everyware.es/.

Figure 2. GeoAcademy application showing geolocated locations for the Sierra Nevada collection of articles.

As can be seen in Figure 2, the geolocations detected by the algorithm are distributed across the map; by clicking on each one, the reference information of the paper and a link to view it are shown. At the bottom, results can be filtered by the type of document being sought (article, conference paper, thesis, etc.) or searched by keyword. The different tabs give access to a traditional search engine with filters and a search radius, as well as a tab describing the project and its members, along with a contact form. Likewise, each scientific work that has been processed includes both the metadata from the source database (in this case Scopus) and all the metadata generated automatically by the algorithm. In our portal, the coordinates in decimal format, as well as a complete list of the identified place names, are added to the bibliographic description.

Discussion

This study has a number of limitations, which we proceed to list and which mainly have to do with processing. The first problem relates to the processing of PDF documents: several records did not have an associated PDF, for others the PDF requires payment, or only the title, or the title and abstract, are available. This limits the number of results that can be covered. Secondly, although coordinates have some formal standardization, in practice each field of knowledge follows different guidelines for writing them, with the greatest variability in UTM coordinates, which the algorithm has nevertheless handled without problems. Another problem is that several studies present the coordinates as images, which makes them more difficult to extract, although in the future they could be captured through OCR technology. At the linguistic level, words with multiple meanings in the toponymy can be problematic and can produce both false positives and false negatives.

The success rate of the GeoAcademy algorithm across all documents is 81.6%, a high percentage for this type of study. We will continue to train the algorithm with much larger documentary collections related to other geographical features. Likewise, it will be applied in other contexts, such as archaeology, where most articles include precise geographic coordinates. It could also be applied to the digital humanities, since interesting projects are being carried out in which historical works are geolocated through the place names that appear in them.

The third objective of this work is to provide the platform with a layer of information capable of representing bibliometric and altmetric information, that is, scientific and social impact. In this sense, future development involves providing the GeoAcademy platform with the following functionalities: 1) filtering locations based on indicators (Impact Factor, number of citations, Altmetric Attention Score, etc.); 2) using geoposition markers differentiated according to the values of those indicators. Likewise, the platform will present a bibliometric summary for the different coordinates and locations, offering bibliometric information at a level not currently available. Future developments could include its integration with an established tool such as bibliometrix. A minimal, purely hypothetical sketch of functionality 1 is given below.
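The sketch assumes a simple record per geolocated article; the data model and function names are our own illustration of the planned filtering, not features already implemented on the platform.

```python
# Hypothetical sketch of the planned indicator-based filtering of map markers.
from dataclasses import dataclass


@dataclass
class GeolocatedArticle:
    title: str
    lat: float
    lon: float
    citations: int
    altmetric_score: float


def filter_by_citations(articles: list[GeolocatedArticle],
                        min_citations: int) -> list[GeolocatedArticle]:
    """Keep only the map markers for articles with at least `min_citations` citations."""
    return [a for a in articles if a.citations >= min_citations]
```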

It would also be very interesting if users of large scientific databases such as Scopus or Web of Science, having performed a search and obtained a list of results, could see that list geographically on a map by applying our algorithm, for a better user experience; that is, to integrate our project into their platforms. Finally, numerous studies do not work with geographical points; we are currently working on how to show these studies on our platform.

References

Cascón-Katchadourian, J., & Ruiz Rodríguez, A. Á. (2016). Descripción y valoración del software MapTiler: Del mapa escaneado a la capa interactiva publicada en la web.

Catini, R., Karamshuk, D., Penner, O., & Riccaboni, M. (2015). Identifying geographic clusters: A network analytic approach. Research Policy, 44(9), 1749-1762.

Cortés-José, J. (2001). El documento cartográfico. In J. Jiménez Pelayo, J. Monteagudo López-Menchero, & F. J. Bonachera Cano, La documentación cartográfica: Tratamiento, gestión y uso (pp. 37-113). Huelva: Universidad de Huelva.

Ramos Vacca, I. D., & Bucheli Guerrero, V. A. (2015). Automatic geolocation of the scientific knowledge: Geolocarti. 2015 10th Computing Colombian Conference (10CCC), 416-424. doi:10.1109/ColumbianCC.2015.7333454



Similar documents

A similar problem – which is closely related to the original sum-product conjecture – is to bound the number of sums and ratios along the edges of a graph2. We give upper and

Government policy varies between nations and this guidance sets out the need for balanced decision-making about ways of working, and the ongoing safety considerations

However, as this disease affects each person differently, not all care and treatment options may be appropriate for each individual. The way

We have created this abstract to give non-members access to the country and city rankings — by number of meetings in 2014 and by estimated total number of participants in 2014 —

The objective was to analyse the processing and data management requirements of simulation applications which support research in five scientific areas: astrophysics,

Considering the challenge of detecting and evaluating the presence of marine wildlife, we present a full processing pipeline for LiDAR data, that includes water turbidity

For this reason, firstly, the different processing tasks involved in the automatic people detection in video sequences have been defined, then a proper classification of the state

For both the free liquid slabs and the adsorbed films and despite of the low vapor density, we have found that the fluctuations associated with the evaporation of particles to the