FUSION
FUnctionality Sharing In Open eNvironments
Heinz Nixdorf Chair for Distributed Information Systems
 
Measures for Benchmarking Semantic Web Service Matchmaking Correctness (SEALS Best Evaluation Paper Award)

Title: Measures for Benchmarking Semantic Web Service Matchmaking Correctness (SEALS Best Evaluation Paper Award)
Authors: Ulrich Küster and Birgitta König-Ries
Source: Proceedings of the 7th Extended Semantic Web Conference (ESWC10)
Place: Heraklion, Crete, Greece
Date: 2010-05-01
Type: Conference Paper
Abstract:

Semantic web services (SWS) promise to take service-oriented computing to a new level by enabling the semi-automation of time-consuming programming tasks. At the core of SWS are solutions to the problem of SWS matchmaking, i.e., the problem of filtering and ranking a set of services with respect to a service query.
Comparative evaluations of different approaches to this problem form the basis for future progress in this area. Reliable evaluations require informed choices of evaluation measures and parameters.
This paper establishes a solid foundation for such choices by providing a systematic discussion of the characteristics and behavior of various retrieval correctness measures, both in theory and through experimentation.
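For readers unfamiliar with retrieval correctness measures, the following minimal Python sketch illustrates one classic example, average precision over a binary-relevance ranking, applied to a matchmaker's ranked result list. It is not taken from the paper (which discusses and compares a range of such measures); the service names and relevance judgments are purely hypothetical.

```python
def average_precision(ranked_services, relevant):
    """Average precision of a ranked service list against the set of
    services judged relevant for the query (binary relevance)."""
    hits = 0
    precision_sum = 0.0
    for rank, service in enumerate(ranked_services, start=1):
        if service in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this cut-off
    return precision_sum / len(relevant) if relevant else 0.0


# Hypothetical matchmaker output for a single service query.
ranking = ["bookFlight", "bookHotel", "rentCar", "buyTicket"]
relevant = {"bookFlight", "buyTicket"}
print(average_precision(ranking, relevant))  # (1/1 + 2/4) / 2 = 0.75
```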

File: ESWC2010.pdf
Slides: ESWC2010_ukuester.pdf
BibTex:
@INPROCEEDINGS{KK10,
  author = {Ulrich K\"uster and Birgitta K\"onig-Ries},
  title = {Measures for Benchmarking Semantic Web Service Matchmaking Correctness},
  booktitle = {Proceedings of the 7th Extended Semantic Web Conference (ESWC2010)},
  year = {2010},
  month = {May},
  address = {Heraklion, Crete, Greece},
  abstract = {Semantic web services (SWS) promise to take service oriented computing
	to a new level by allowing to semi-automate time-consuming programming
	tasks. At the core of SWS are solutions to the problem of SWS matchmaking,
	i.e., the problem of filtering and ranking a set of services with
	respect to a service query. 
	
	Comparative evaluations of different approaches to this problem form
	the base for future progress in this area. Reliable evaluations require
	informed choices of evaluation measures and parameters. 
	
	This paper establishes a solid foundation for such choices by providing
	a systematic discussion of the characteristics and behavior of various
	retrieval correctness measures in theory and through experimentation.}
}