Tutorial on Evaluation of Semantic Web Technologies at ESWC 2009
Duration: Half Day
Roughly ten years after the vision of the Semantic Web was first presented, Semantic Web technologies have become a well-established pillar of computer science research. With the increasing number of technologies being developed, the question of how to compare and evaluate the various approaches becomes increasingly important. Such evaluation is critical not only for future scientific progress, but also for the industrial adoption of the developed technologies. This tutorial presents the state of the art of Semantic Web technology evaluation. It is targeted at researchers and practitioners interested in learning how to investigate the strengths and shortcomings of Semantic Web technologies.
Aims and Target Audience
The tutorial presents recent developments and trends in the area of Semantic Web technology evaluation. The target audience includes researchers and practitioners who either use or develop such technology. Users of Semantic Web technology will learn how to find the technology suitable for their needs, how to investigate the pros and cons of competing approaches, and ultimately how to identify the approach best suited to the problem at hand. Researchers will learn about methodologies and initiatives for the evaluation of their research results. The purpose of such evaluation is not to show one approach to be superior to another, but to develop a better understanding of the strengths and shortcomings of the various approaches and to mutually learn from the analysis of their causes.
The tutorial is structured into a half-hour introduction and four lessons, each an hour long (including breaks). The introduction will give an overview of the current state of Semantic Web technology evaluation, and the four lessons will each present the evaluation of a different aspect of Semantic Web technologies. Short demos will be used to illustrate the presented evaluation approaches.
The introduction presents the current state of Semantic Web technology evaluation, focusing on methodological aspects of evaluation and benchmarking as well as on community-driven evaluation initiatives.
The first lesson deals with evaluating the interoperability of Semantic Web technologies. Semantic Web technologies need to interchange ontologies for further use. Due to the heterogeneity in the knowledge representation formalisms of the different existing technologies, however, interoperability is a problem in the Semantic Web, and the limits of the interoperability of current technologies are as yet unknown. This lesson will present the basics of Semantic Web technology interoperability evaluation and the UPM Framework for Benchmarking Interoperability, an evaluation infrastructure that includes all the resources (experiment definitions, benchmark suites and tools) needed for benchmarking the interoperability of Semantic Web technologies using RDF(S) and OWL as interchange languages. In addition, it will present how this framework was applied in two interoperability benchmarking activities carried out over Semantic Web technologies.
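The basic shape of such an interoperability experiment can be sketched in a few lines: an ontology is exported by one tool, imported by another, and the result is compared with the original to see which knowledge was preserved, lost or wrongly introduced. The following is a minimal sketch only; the triple representation and the two "tools" (including the dropped `rdfs:comment` construct) are hypothetical stand-ins for real exporters and importers exchanging RDF(S) files, not part of the UPM framework itself.

```python
# Sketch of an interoperability round-trip experiment. Ontology
# statements are modelled as (subject, predicate, object) tuples;
# export_tool and import_tool are hypothetical stand-ins for real tools.

def export_tool(triples):
    """Hypothetical exporter that drops a construct it does not
    support (here: rdfs:comment annotations)."""
    return {t for t in triples if t[1] != "rdfs:comment"}

def import_tool(triples):
    """Hypothetical importer that preserves everything it receives."""
    return set(triples)

def round_trip(original):
    """Run one interchange experiment and report the outcome."""
    result = import_tool(export_tool(original))
    return {
        "preserved": original & result,  # knowledge that survived
        "lost": original - result,       # knowledge lost in transit
        "added": result - original,      # knowledge wrongly introduced
    }

ontology = {
    ("ex:Person", "rdf:type", "rdfs:Class"),
    ("ex:Student", "rdfs:subClassOf", "ex:Person"),
    ("ex:Student", "rdfs:comment", "A person enrolled somewhere"),
}

report = round_trip(ontology)
```

A benchmark suite then consists of many such small ontologies, each probing one language construct, so that the limits of a tool pair's interoperability can be mapped systematically.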
The second lesson presents initiatives and methodologies for the evaluation of Semantic Web Service (SWS) technology. Semantically annotated Web services bring the ideas of the Semantic Web to the Service Oriented Computing paradigm. They aim at facilitating the automation of discovery, mediation, and composition of Web services using semantic annotations. Various formalisms and frameworks towards this vision have been put forward in recent years, most notably OWL-S, WSMO, and the recent W3C recommendation SAWSDL. Within this session, the current approaches to evaluating SWS technology will be introduced, in particular the ongoing international initiatives in the area, i.e. the Semantic Web Services Challenge and the S3 Contest on Semantic Service Selection. Lessons learned within these initiatives so far will be discussed and an overview of possible future developments in the area will be given.
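When service discovery is evaluated, a matchmaker typically returns a ranked list of services for a goal, which is scored with information-retrieval measures against relevance judgements from a reference collection. The sketch below illustrates two common such measures; the service names and the relevance judgement are made up for illustration and do not come from any actual contest test collection.

```python
# IR-style scoring of a service matchmaker's ranked output against a
# reference set of relevant services. All data here is hypothetical.

def precision_at_n(ranking, relevant, n):
    """Fraction of the top-n returned services that are relevant."""
    return sum(1 for s in ranking[:n] if s in relevant) / n

def average_precision(ranking, relevant):
    """Mean of the precision values at each rank where a relevant
    service appears; 0.0 if no relevant service is returned."""
    hits, total = 0, 0.0
    for i, s in enumerate(ranking, start=1):
        if s in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

ranking = ["bookFlight2", "rentCar1", "bookFlight1", "bookHotel1"]
relevant = {"bookFlight1", "bookFlight2"}

p_at_2 = precision_at_n(ranking, relevant, 2)   # 0.5
ap = average_precision(ranking, relevant)       # 5/6
```

Averaging such scores over many goals gives a single figure per matchmaker, which is one way ranked retrieval performance can be compared across tools.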
The third lesson focuses on the evaluation of ontology matching algorithms. We will introduce the basic modalities of evaluating such algorithms, in relation to the five Ontology Alignment Evaluation Initiative campaigns, and present several concrete evaluation settings. Then, we will consider several important problems tied to this evaluation: the difference between output evaluation and application evaluation, the problem of reaching consensus over expected results, the changes that semantics introduces with regard to traditional evaluation frameworks, and the benefits of automation.
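The basic output-evaluation modality compares a computed alignment against a reference alignment using precision, recall and F-measure. A minimal sketch, with correspondences modelled as (entity1, entity2, relation) triples and with made-up alignments for illustration:

```python
# Scoring a computed ontology alignment against a reference alignment.
# Correspondences are (entity1, entity2, relation) tuples; both
# alignments below are hypothetical examples.

def precision_recall_fmeasure(found, reference):
    correct = found & reference
    precision = len(correct) / len(found) if found else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

reference = {
    ("o1:Article", "o2:Paper", "="),
    ("o1:Author", "o2:Writer", "="),
    ("o1:Journal", "o2:Periodical", "="),
}
found = {
    ("o1:Article", "o2:Paper", "="),
    ("o1:Author", "o2:Writer", "="),
    ("o1:Title", "o2:Name", "="),
}

p, r, f = precision_recall_fmeasure(found, reference)
```

One of the questions the lesson raises is precisely where this simple set comparison falls short, e.g. when a found correspondence is not syntactically in the reference but is semantically entailed by it.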
Finally, the fourth lesson covers the evaluation of Semantic Web storage and retrieval systems. We will give an overview of the requirements that arise for Semantic Web storage systems within common application scenarios and discuss how these requirements are reflected by current benchmarks, such as LUBM, SP2Bench and BSBM. We will conclude with recommendations about which benchmarks can meaningfully be applied in which scenario and give an overview of possible future work in the area.
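Benchmarks of this kind share a common skeleton: load a (usually generated) dataset into the store, run a fixed query mix against it, and report metrics such as queries per second. The sketch below shows that skeleton only; the in-memory "store", the toy pattern matcher and the two-pattern query mix are stand-ins for a real RDF store and real SPARQL queries, not an actual benchmark.

```python
import time

# Skeleton of a storage benchmark run: dataset, fixed query mix,
# timing, metric. Triples are (s, p, o) tuples; None is a wildcard.

def match(store, pattern):
    """Return the triples in the store matching a pattern."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

store = {
    ("ex:alice", "rdf:type", "ub:GraduateStudent"),
    ("ex:bob", "rdf:type", "ub:GraduateStudent"),
    ("ex:alice", "ub:takesCourse", "ex:course1"),
}

# A fixed "query mix", in the spirit of a benchmark's query set.
query_mix = [
    (None, "rdf:type", "ub:GraduateStudent"),  # all graduate students
    ("ex:alice", None, None),                  # everything about alice
]

start = time.perf_counter()
results = [match(store, q) for q in query_mix]
elapsed = time.perf_counter() - start
qps = len(query_mix) / elapsed if elapsed > 0 else float("inf")
```

The benchmarks differ mainly in what they plug into this skeleton: the shape and scale of the generated data, the query mix, and whether updates are included.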
Prerequisites and Technical Requirements
A basic understanding of the terms and technologies of the Semantic Web (the Semantic Web stack, RDF, SPARQL, OWL, etc.) will be assumed. Beyond that, no special skills are required. There are no special technical requirements for the tutorial, except for a standard projector and a screen. Depending on the size of the room and the audience, a microphone and speakers may be useful.
Lessons and Presenters
- Introduction by Raúl García-Castro
- Interoperability by Raúl García-Castro
- Semantic Web Services by Ulrich Küster
- Ontology Alignment by Jérôme Euzenat
- Storage and retrieval systems by Chris Bizer
Ulrich Küster is a researcher at the Friedrich Schiller University Jena, Germany, where he is pursuing a Ph.D. in the area of Semantic Web services. The main focus of his work is the evaluation of SWS technology. He co-chairs the Semantic Web Service Challenge, is a member of the steering committee of the S3 Contest on Semantic Service Selection and was a member of the W3C SWS Testbed Incubator Group, which has worked towards standardizing a methodology for SWS evaluation. He has co-organized several SWS evaluation workshops and has several years of teaching experience.
Raúl García-Castro is a Research Associate at the Computer Science School at the Universidad Politécnica de Madrid (UPM), where he obtained his Ph.D. titled “Benchmarking Semantic Web technology”. He is one of the developers of the WebODE ontology engineering workbench and his research focuses on the evaluation and benchmarking of Semantic Web technologies. He led two activities for benchmarking the interoperability of Semantic Web technologies using RDF(S) and OWL as interchange languages. He was part of the organising committee of ESWC2008 and co-organized the ISWC2007 and ESWC2008 workshops on Evaluation of Ontology Tools (EON2007, EON-SWSC2008). He is a member of the Test Beds and Challenges service of STI International. He has given several tutorials on the topics of ontologies and the Semantic Web.
Jérôme Euzenat is a senior research scientist at INRIA (Montbonnot, France). He set up and leads the INRIA Exmo project-team devoted to “Computer-mediated communication of structured knowledge”. The project is also part of the Laboratoire d’Informatique de Grenoble (Grenoble computer science lab). Jérôme Euzenat is the main leader of the Ontology Alignment Evaluation Initiative, which has organised five evaluation campaigns since 2004. With Pavel Shvaiko, he taught the Ontology and Schema Matching tutorial at ESWC-2005, which led to the Ontology Matching book. He has given lectures on this topic a dozen times since then.
Christian Bizer is a professor of Web-based systems at Freie Universität Berlin. The results of his work include the Named Graphs data model, which was adopted into the W3C SPARQL standard; the Fresnel display vocabulary, implemented by several Semantic Web browsers; the D2RQ mapping language, which is widely used for mapping relational databases to the Semantic Web; and the Berlin SPARQL Benchmark. He takes a leading role in the W3C Linking Open Data community effort and the DBpedia project, both of which aim at interlinking large numbers of data sources on the Web.