Blue Hackathon iMarine Data Challenges
==Data Challenges==

===Challenge #1===
Enrich HTML web content with RDF annotation, and enable annotation-based document discovery
=====Background=====
Most information resources in the "Blue" domain were created without exploitation by advanced search and discovery mechanisms in mind. They thus lack the semantic richness that would improve their visibility, usefulness, and quality.

One cost-effective opportunity to overcome this limitation may be the addition of RDFa to existing datasets. This can be achieved by a mechanism that extracts concepts from an HTML text, aligns these with concepts from a semantic KB, and returns the URIs that can be attached to the source, either off-line, as header metadata, or in-line.

Proving that such a mechanism can effectively enrich a 'flat' resource with interpretable RDF will present evidence to data owners in the "Blue" domain that they can add value to their resources at limited cost, with the help of semantic technicians.
=====Objectives=====
We ask the hackathon participants to find a technical solution to enrich the [http://www.fao.org/fishery/species/search/en factsheets] of the FIGIS portal with annotations in RDFa format. The annotations will consist of at least the URIs of the entities referenced in the factsheet, and of a set of relevant relations provided with the datasets.

We ask the hackathon participants to:
#GOAL: Provide an RDFa client (a minimal sketch follows this list) to
## extract concepts from factsheets, e.g. by accessing the factsheet content through the service provided [http://figisapps.fao.org/vrmf/samples/species/FS/ here],
## identify URIs from several KBs,
## create the RDF annotations, and
## expose these RDF annotations.
#GOAL: Use the annotations produced in item one as input to an online search over factsheets (publications, GIS maps, images, statistical time series), to create an enhanced discovery facility that complements the web page's information content.
#GOAL: Retrieve a set of factsheets via online search services.
#GOAL: Write RDFa to these factsheets.
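The following is a minimal sketch of goals 1 and 4, assuming the jsoup library for HTML parsing; the factsheet URL, the vocabulary prefix, and the entity URI are placeholders, and in a real client the URIs would come from a lookup against the KBs listed under Datasets (e.g. FLOD or the MarineTLO warehouse).

<pre>
// Minimal sketch (assumptions: jsoup for HTML parsing; placeholder URLs and URIs).
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class RdfaEnricher {

    public static void main(String[] args) throws Exception {
        // Placeholder: substitute the address of a concrete FIGIS factsheet page.
        String factsheetUrl = "http://www.fao.org/fishery/species/search/en";
        Document doc = Jsoup.connect(factsheetUrl).get();

        // Declare an RDFa 1.1 prefix on the page body (placeholder vocabulary).
        doc.body().attr("prefix", "kb: http://example.org/kb/");

        // Annotate the first heading with the URI of the entity it refers to.
        // In a real client this URI is obtained by aligning the extracted
        // concept with a KB (FLOD, MarineTLO) via a SPARQL lookup.
        Element title = doc.select("h1").first();
        if (title != null) {
            title.attr("about", "http://example.org/kb/EXAMPLE_SPECIES");
            title.attr("typeof", "kb:Species");
            title.attr("property", "kb:prefLabel");
        }

        // The enriched HTML can be stored off-line, or written back in-line.
        System.out.println(doc.outerHtml());
    }
}
</pre>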
=====Challenges=====
TBD
=====Datasets=====
* FAO Species
** [[#Aquatic_Species_Fact_Sheets | Fact Sheets]]
* KBs:
** [[#TLO_based_SPARQL_endpoint | TLO]]
** [[#FAO_FLOD | FLOD]]

=====APIs=====
See below.

===Challenge #2===
Search results presentation and exploitation.
Search results regarding marine data could be enriched in order to provide an advanced experience to the user. Derived information could be injected into the results by identifying special keywords (related to the query), with results retrieved from OpenSearch and other external data sources. Exploration of the results could also be improved, from simple browsing into information discovery, by providing accumulated information, filtering, suggestions, etc.

=====Background=====
(Why this is relevant to a blue-er world)

=====Objectives=====
* We ask the hackathon participants to enrich the search results retrieved from iMarine collections by identifying special keywords (related to the topic) with results retrieved from OpenSearch and other external data sources.
* We ask the hackathon participants to explore the database by performing a number of predefined queries and to keep statistics on them, in order to enhance the existing browsing methods.

=====Challenges=====
TBD
=====Datasets=====
[[#Ecoscope | Ecoscope]]

=====APIs=====
[[#gCube_Search_client | gCube Search client]]

===Challenge #3===
Processing and Visualization of data sets
Exploit the geolocation of real-world data in order to calculate and visualize geographical information and trends (e.g. migration of species). Support interactive map search over multiple sources, with combined and enriched results. Search results will be presented on a map, with possible options for clustering, filtering, etc. Users could also interact with the results, e.g. clicking on a result or location would show related results and other helpful information.

=====Background=====
(Why this is relevant to a blue-er world)

=====Objectives=====
* Exploit the species occurrence data in order to calculate and visualize geographical trends (e.g. migration of species).
* Interactive map search: search over data from multiple sources, combine them, and enrich the results. Search results can be presented on a map.
** clustering, filtering
** trend identification
** interaction with results, e.g. clicking on a result or location shows related results and other helpful information

=====Challenges=====
TBD
=====Datasets=====
[[#iMarine_GeoNetwork | iMarine GeoNetwork]]
=====APIs=====
[[#GeoNetwork_Client | iMarine GeoNetwork Client]]
==Datasets and APIs==
===Datasets===

====TLO based SPARQL endpoint====
=====Data Graph=====
* http://www.ics.forth.gr/isl/TLObasedDataWarehouse
=====Description=====
The description of the MarineTLO can be found here:

http://wiki.i-marine.eu/index.php/Top_Level_Ontology

=====Exploitation Example=====
(How it can be used within a challenge)
====FAO FLOD====
* http://www.fao.org/figis/flod/

=====FLOD SPARQL endpoint=====
* http://www.fao.org/figis/flod/endpoint/flod (Jena Joseki)
=====Description=====

The Fisheries Linked Open Data (FLOD) initiative stems from the rising Linked Open Data trend. It is dedicated to creating a dense network of relationships among the entities of the fishery domains, and to serving them programmatically to semantic and traditional application environments.
It started with the objective of identifying and interlinking equivalent codes from the different code lists in use by FIGIS, in order to consolidate the information referenced by each code, and then expanded to include external data sources such as NAFO, EU, and ICCAT.
Currently the FLOD network includes entities and relationships from the domains of Marine Species, Water Areas, Land Areas, and Exclusive Economic Zones. It serves software applications in the domains of statistics and GIS.
The FLOD content is exposed either via SPARQL endpoints (suitable for semantic applications) or via a Java API to be embedded in consumers' application code.

=====Exploitation Example=====
Query for entities of kind (a minimal client sketch follows this list):

* Gear types
* Vessel types
* Marine species
* Fishing Areas
* Statistical countries (Flagstate)
* Regional Fisheries Bodies

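As a minimal sketch of such a query, the snippet below uses Apache Jena as the SPARQL client (an assumed choice; any SPARQL client works) against the FLOD endpoint listed above. To stay generic it only lists the distinct RDF types exposed by the endpoint; the class URIs for the specific kinds of entities above should be taken from the FLOD documentation.

<pre>
// Minimal sketch, assuming Apache Jena 3.x (jena-arq) as the SPARQL client.
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class FlodTypesQuery {

    public static void main(String[] args) {
        String endpoint = "http://www.fao.org/figis/flod/endpoint/flod";
        // Generic query: list the RDF types present in the endpoint.
        String query = "SELECT DISTINCT ?type WHERE { ?s a ?type } LIMIT 50";

        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("type").getURI());
            }
        }
    }
}
</pre>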
====Ecoscope====

http://www.ecoscopebc.ird.fr

=====Description=====
A knowledge base on Exploited Marine Ecosystems: the repository gives access to a series of information related to species, fishing vessels, agents, and information resources (images, databases, spatial information, publications, and plots).

The information can be accessed through SPARQL or OpenSearch (a small retrieval sketch follows the links below).

SPARQL endpoint:

* http://ecoscopebc.mpl.ird.fr/joseki/ecoscope.html

OpenSearch description document:

* http://d4science.web.cern.ch/d4science/OpenSearch/OpenSearchRSS.xml
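As a small sketch of the OpenSearch route, the snippet below simply downloads the description document listed above; the Url templates it contains tell a client how to build actual search requests.

<pre>
// Minimal sketch: download the OpenSearch description document listed above.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class EcoscopeOpenSearch {

    public static void main(String[] args) throws Exception {
        URL descriptor = new URL("http://d4science.web.cern.ch/d4science/OpenSearch/OpenSearchRSS.xml");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(descriptor.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // The <Url template="..."> entries describe how to build search queries.
                System.out.println(line);
            }
        }
    }
}
</pre>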
====Genesi DEC====

http://www.genesi-dec.eu/

=====Description=====

The Genesi DEC project established open data and services access, allowing European and worldwide Digital Earth Communities to seamlessly access, produce, and share data, information, products, and knowledge. This creates a multi-dimensional, multi-temporal, and multi-layer information facility of huge value in addressing global challenges such as biodiversity, climate change, pollution, and economic development.

http://www.genesi-dec.eu/search/

=====Exploitation Example=====
====iMarine GeoNetwork====
* http://geonetwork.d4science.org/geonetwork/
=====Description=====
The iMarine GeoNetwork service is the entry point for the discovery of, and access to, many types of georeferenced data for the marine field.
The service is equipped with a cluster of GeoServer and THREDDS services which physically host the data. In particular, the following data can be queried and retrieved:

* AquaMaps distribution maps (http://aquamaps.org)
* FAO GeoNetwork maps (http://www.fao.org/geonetwork)
* MyOcean environmental data (http://www.myocean.eu/)
* WorldClim global climate layers (http://www.worldclim.org/)

=====Exploitation Example=====

The GeoNetwork service can be used to retrieve GIS information for a given marine species. Data can be accessed through standard protocols such as WMS and WFS.
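As a sketch of the WMS route, the snippet below issues a standard OGC GetCapabilities request to list the available layers; the base URL is a placeholder, to be replaced with a WMS endpoint found through the GeoNetwork catalogue.

<pre>
// Minimal sketch: list the layers offered by a WMS server via GetCapabilities.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class WmsCapabilities {

    public static void main(String[] args) throws Exception {
        // Placeholder base URL: replace with a WMS endpoint discovered via GeoNetwork.
        String wmsBase = "http://example.org/geoserver/wms";
        URL capabilities = new URL(wmsBase + "?service=WMS&version=1.3.0&request=GetCapabilities");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(capabilities.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // The <Name> elements inside <Layer> blocks identify the available maps.
                if (line.contains("<Name>")) {
                    System.out.println(line.trim());
                }
            }
        }
    }
}
</pre>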
====iMarine Biodiversity Data Service====
=====Description=====

The Species Product Discovery (SPD) web service gives access to biodiversity data coming from several providers (OBIS, GBIF, CoL, ...).

The client API automatically discovers the endpoint of the service from the iMarine Information System.

=====Exploitation Example=====

The service can be used to retrieve occurrence points and taxon information, coming from the available data providers, for a given marine species. Data can be extracted in CSV and DwC-A format.

====Aquatic Species Fact Sheets====
=====Description=====

Aquatic species fact sheets provided by FAO.

* http://www.fao.org/fishery/species/search/en

VRMF species factsheet data extraction API:
* http://figisapps.fao.org/vrmf/samples/species/FS/

=====Exploitation Example=====

This service can be used to extract aquatic species factsheets in either JSON or CSV format.

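As a sketch, the snippet below fetches the sample extraction endpoint with Java 11's HttpClient; the query parameters for choosing JSON or CSV output are documented by the service and are not assumed here.

<pre>
// Minimal sketch: plain HTTP GET against the VRMF factsheet sample endpoint.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FactsheetFetch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://figisapps.fao.org/vrmf/samples/species/FS/"))
                .GET()
                .build();

        // Print the HTTP status and the raw response body.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}
</pre>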
===APIs===
====SPARQL Client====
Any SPARQL client available on the Web.
====GeoNetwork Client====

=====Description=====
Wiki
* https://gcube.wiki.gcube-system.org/gcube/index.php/GeoNetwork_library

A library to interact with GeoNetwork's REST interface to publish, modify, delete, and search for metadata. The library is designed on top of the geoserver-manager library developed by GeoSolutions. Metadata objects managed by the library are compliant with the ISO 19115:2003/19139 standard specification.

=====Exploitation Example=====
Javadoc
* http://www.gcube-system.org/javadocs/2-14-0/geonetwork_1.0.1-2.14.0/
====SPD Client====

=====Description=====
Wiki
* https://gcube.wiki.gcube-system.org/gcube/index.php/Species_Product_Discovery:_client_library

The SPD client can be used to access the biodiversity data broker implemented in iMarine, the SPD service. More details about the architecture of the service are available at
https://gcube.wiki.gcube-system.org/gcube/index.php/Biodiversity_Access

=====Exploitation Example=====
The client can be used, for example, to query the OBIS data source and return the taxonomic information related to '''shark''':

<pre>
// Select the gCube scope to operate in, then build an SPD Manager client with a 3-minute timeout.
ScopeProvider.instance.set("/d4science.research-infrastructures.eu/gCubeApps");
Manager manager = manager().withTimeout(3, TimeUnit.MINUTES).build();

// Ask the SPD service to resolve the name 'shark' with OBIS and return Taxon results.
Stream<ResultElement> taxa = manager.search("SEARCH BY CN 'shark' RESOLVE WITH OBIS EXPAND IN OBIS RETURN Taxon");

// Print each taxon, then walk up its parent hierarchy.
while (taxa.hasNext()){
  TaxonomyItem taxon = (TaxonomyItem)taxa.next();
  System.out.println(taxon.getAuthor()+" "+taxon.getRank()+" "+taxon.getScientificName());
  while ((taxon=taxon.getParent())!=null)
    System.out.println(taxon.getScientificName()+" -- "+taxon.getRank());
}
</pre>
====gCube Search client====
=====Description=====
Wiki
* https://gcube.wiki.gcube-system.org/gcube/index.php/Search_2_Framework_(NEW)
=====Exploitation Example=====
Javadoc
* http://www.gcube-system.org/javadocs/2-14-0/search-client-library_1.0.1-2.14.0/
===Artifacts===
The software distributed by iMarine (gCube) is available through Maven repositories. The following settings.xml configuration file should be set up:

<pre>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

  <profiles>
    <profile>
      <id>gcube</id>
      <repositories>
        <repository>
          <id>gcube-releases</id>
          <name>gCube Releases</name>
          <url>http://maven.research-infrastructures.eu/nexus/content/repositories/gcube-releases</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
        <repository>
          <id>gcube-externals</id>
          <name>gCube Externals</name>
          <url>http://maven.research-infrastructures.eu/nexus/content/repositories/gcube-externals</url>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
          <releases>
            <enabled>true</enabled>
          </releases>
        </repository>
      </repositories>

      <pluginRepositories>
        <pluginRepository>
          <id>gcube-releases</id>
          <name>gCube Releases</name>
          <url>http://maven.research-infrastructures.eu/nexus/content/repositories/gcube-releases</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
        <pluginRepository>
          <id>gcube-externals</id>
          <name>gCube Externals</name>
          <url>http://maven.research-infrastructures.eu/nexus/content/repositories/gcube-externals</url>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
          <releases>
            <enabled>true</enabled>
          </releases>
        </pluginRepository>
      </pluginRepositories>

    </profile>
  </profiles>

  <activeProfiles>
    <activeProfile>gcube</activeProfile>
  </activeProfiles>
</settings>
</pre>
Alternatively, the same repository settings can be included in your pom file. The Maven coordinates of the components to use for the challenges are documented in the related wikis.
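As an illustration only, a component is then added to a project as an ordinary Maven dependency; the coordinates below are placeholders, to be replaced with the ones documented in the component's wiki page.

<pre>
<!-- Placeholder coordinates: take the real groupId/artifactId/version
     from the wiki page of the component you need. -->
<dependencies>
  <dependency>
    <groupId>org.gcube.example</groupId>
    <artifactId>example-client</artifactId>
    <version>1.0.0</version>
  </dependency>
</dependencies>
</pre>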
==External Links==
* [http://wiki.agroknow.gr/agroknow/index.php/BlueHackathon2013 Blue Hackathon Event Home Page]
* [http://wiki.agroknow.gr/agroknow/index.php/BlueHackathon2013-datasets BlueHackathon2013-datasets]