RecSys


ESWC-14 Challenge: Linked Open Data-enabled Recommender Systems

MOTIVATION AND OBJECTIVES

People increasingly need advanced tools that go beyond the canonical search paradigm for seeking relevant information. A new search paradigm is emerging, in which the user perspective is completely reversed: from finding to being found. Recommender systems support this new perspective by pushing relevant objects, selected from a large space of possible options, to potentially interested users. To achieve this, recommendation techniques generally rely on data about three kinds of objects: users, items, and the relations between them.

Recent developments in the Semantic Web community offer novel strategies to represent data about users, items, and their relations that might improve the current state of the art of recommender systems, moving towards a new generation of recommender systems that fully understand the items they deal with. More and more semantic data are published following the Linked Data principles, which make it possible to set links between objects in different data sources and connect information in a single global data space: the Web of Data. Today, the Web of Data includes different types of knowledge represented in a homogeneous form: sedimentary knowledge (encyclopedic, cultural, linguistic, common-sense) and real-time knowledge (news, data streams, ...). These data can be used to interlink diverse information about users, items, and their relations, and to implement reasoning mechanisms that support and improve the recommendation process.

The goal of this challenge is twofold. On the one hand, we want to create a link between the Semantic Web and the Recommender Systems communities. On the other hand, we aim to show how Linked Open Data and semantic technologies can boost the creation of a new breed of knowledge-enabled and content-based recommender systems.


TARGET AUDIENCE

The target audience comprises all communities, both academic and industrial, that are interested in personalized information access, with a particular emphasis on Linked Open Data. At the last ACM RecSys conference, more than 60% of the participants were from industry: a clear sign of the interest in recommender systems for industrial applications ready to be released to the market.


HOW TO PARTICIPATE

1. Make your result submission

  • Register your group using the registration web form available at http://193.204.59.20:8181/eswc2014lodrecsys/signup.html.
  • Choose one or more tasks among Task1, Task2 and Task3 (see TASKS).
  • Build your Recommendation System using the training data described in section DATASET.
  • Evaluate your approach by submitting your results using the evaluation service as described in section EVALUATION.
  • Your final score will be the one computed for the last result submission made before the (extended) deadline of March 14, 2014, 23:59 CET (see IMPORTANT DATES).

2. Submit your paper

The following information has to be provided:

  • Abstract: no more than 200 words.
  • Description: It should contain the details of the system, including why the system is innovative, how it uses Semantic Web technologies, which features or functions the system provides, what design choices were made, and what lessons were learned. The description should also summarize how participants have addressed the evaluation tasks. Papers must be submitted in PDF format, following the style of Springer’s Lecture Notes in Computer Science (LNCS) series (http://www.springer.com/computer/lncs/lncs+authors), and not exceeding 5 pages in length.

All submissions should be made via EasyChair: https://www.easychair.org/conferences/?conf=eswc2014-challenges

We invite potential participants to subscribe to our mailing list in order to be kept up to date with the latest news about the challenge:

https://lists.sti2.org/mailman/listinfo/eswc2014-recsys-challenge


TASKS

Task 1: Rating prediction in cold-start situations

This task deals with the rating prediction problem, in which a system is requested to estimate the value of unknown numeric scores (a.k.a. ratings) that a target user would assign to available items, indicating whether she likes or dislikes them. In order to favor the proposal of content-based, LOD-enabled recommendation approaches, and to limit the use of collaborative filtering approaches, this task aims at predicting ratings in cold-start situations, that is, predicting ratings for users who have few past ratings, and predicting ratings of items that have been rated by few users. The dataset to use in the task - DBbook - relates to the book domain. It contains explicit numeric ratings assigned by users to books. For each book we provide the corresponding DBpedia URI. Participants will have to exploit the provided ratings as a training set and estimate the unknown ratings in a withheld evaluation set. Recommendation approaches will be evaluated on the evaluation set by means of the Root Mean Square Error (RMSE), which measures the difference between real and estimated ratings.
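
To make the evaluation criterion concrete, the following minimal Java sketch computes RMSE over parallel arrays of actual and predicted ratings; the class and variable names are our own illustration and not part of the challenge tooling.

// Minimal RMSE sketch: the square root of the mean squared difference
// between actual and predicted ratings.
public class RmseExample {

    static double rmse(double[] actual, double[] predicted) {
        double sumSquaredError = 0.0;
        for (int i = 0; i < actual.length; i++) {
            double diff = actual[i] - predicted[i];
            sumSquaredError += diff * diff;
        }
        return Math.sqrt(sumSquaredError / actual.length);
    }

    public static void main(String[] args) {
        double[] actual    = {5.0, 3.0, 4.0, 1.0};
        double[] predicted = {4.5, 3.5, 4.0, 2.0};
        System.out.println(rmse(actual, predicted)); // prints ~0.612
    }
}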

Task 2: Top-N recommendation from binary user feedback

This task deals with the top-N recommendation problem, in which a system is requested to find and recommend a limited set of N items that best match a user profile, rather than correctly predicting the ratings for all available items. Similarly to Task 1, in order to favor the proposal of content-based, LOD-enabled recommendation approaches, and to limit the use of collaborative filtering approaches, this task aims at generating ranked lists of items for which no graded ratings are available, but only binary ones. Also in this case, the DBbook dataset is used. In this task, the accuracy of recommendation approaches will be evaluated on an evaluation set using the F-measure.
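
As a rough illustration of the metric (the official definition is in the metrics PDF linked under EVALUATION), the following Java sketch computes the F-measure of a top-N list as the harmonic mean of precision and recall; the item identifiers are made up for the example.

import java.util.List;
import java.util.Set;

// Minimal F-measure sketch: precision is the fraction of the top-N list
// that is relevant, recall the fraction of all relevant items that appear
// in the top-N list; F is their harmonic mean.
public class FMeasureExample {

    static double fMeasure(List<String> topN, Set<String> relevant) {
        long hits = topN.stream().filter(relevant::contains).count();
        if (hits == 0) return 0.0;
        double precision = (double) hits / topN.size();
        double recall = (double) hits / relevant.size();
        return 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        List<String> topN = List.of("b1", "b2", "b3", "b4", "b5");
        Set<String> relevant = Set.of("b2", "b5", "b9");
        System.out.println(fMeasure(topN, relevant)); // precision 2/5, recall 2/3 -> 0.5
    }
}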

Task 3: Diversity

A very interesting aspect of content-based recommender systems, and hence of LOD-enabled ones, is that they make it possible to evaluate the diversity of recommended items in a straightforward way. This is a very popular topic for content-based recommender systems, which usually suffer from over-specialization. In this task, the evaluation will consider a combination of both the accuracy (F-measure) of the recommendation list and the diversity (Intra-List Diversity) of the items belonging to it. Also for this task, the DBbook dataset is used. Given the book domain, diversity with respect to the two properties http://dbpedia.org/ontology/author and http://purl.org/dc/terms/subject will be considered.
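
To give an intuition of Intra-List Diversity (the precise definition used by the evaluation service is in the metrics PDF linked under EVALUATION), the Java sketch below averages pairwise Jaccard distances between the items' sets of property values (e.g. their author and subject values); the Jaccard instantiation is our assumption for illustration.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative ILD sketch: the average pairwise distance between the items
// of a recommendation list, here with a Jaccard distance over each item's
// set of property values (an assumption made for illustration).
public class IldExample {

    static double jaccardDistance(Set<String> a, Set<String> b) {
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : 1.0 - (double) intersection.size() / union.size();
    }

    static double ild(List<Set<String>> itemFeatures) {
        double sum = 0.0;
        int pairs = 0;
        for (int i = 0; i < itemFeatures.size(); i++) {
            for (int j = i + 1; j < itemFeatures.size(); j++) {
                sum += jaccardDistance(itemFeatures.get(i), itemFeatures.get(j));
                pairs++;
            }
        }
        return pairs == 0 ? 0.0 : sum / pairs;
    }
}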


DATASET

DBbook dataset

This dataset relies on user data and preferences retrieved from the Web. The books available in the dataset have been mapped to their corresponding DBpedia URIs; the mapping contains 8170 DBpedia URIs. These mappings can be used to extract semantic features from DBpedia or other LOD repositories to be exploited by the recommendation approaches proposed in the challenge. The dataset is split into a training set and an evaluation set. In the former, user ratings are provided to train a system; in the latter, the ratings have been removed and will be used in the final evaluation step.

The mapping file is available at http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/DBbook_Items_DBpedia_mapping.tsv.zip. It contains a tab-separated values file where each line has the following format: DBbook_ItemID \t name \t DBpedia_uri. We suggest extracting a semantic description for all the items in this mapping file, starting from the DBpedia URIs (see A bit of SPARQL).
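
As a starting point, here is a minimal Java sketch that loads the mapping file into an itemID-to-URI map; the local file path is an assumption, and a header line, if present in your copy, should be skipped.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: load DBbook_ItemID \t name \t DBpedia_uri lines into a map.
public class MappingLoader {

    public static Map<String, String> load(Path tsv) throws IOException {
        Map<String, String> itemToUri = new HashMap<>();
        for (String line : Files.readAllLines(tsv)) {
            String[] fields = line.split("\t");
            if (fields.length == 3) {
                itemToUri.put(fields[0], fields[2]); // DBbook_ItemID -> DBpedia URI
            }
        }
        return itemToUri;
    }

    public static void main(String[] args) throws IOException {
        // Path to the unzipped mapping file (adjust to your local copy).
        Map<String, String> mapping = load(Paths.get("DBbook_Items_DBpedia_mapping.tsv"));
        System.out.println(mapping.size() + " items mapped");
    }
}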

The training sets are available at:

  • Task 2 and Task 3: http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/DBbook_train_binary.zip. The archive contains a tab-separated values file with the training data and a README describing its content. Each line of the file has the form: userID \t itemID \t rating. The ratings are on a binary scale: 1 means that the item is relevant for the user, 0 means irrelevant. The training set contains 72372 ratings from 6181 users on 6733 items, each item having been rated by at least one user (a loading sketch follows this list).
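
A minimal Java sketch for reading this file, assuming the userID \t itemID \t rating layout described above (skip a header line if your copy contains one) and grouping the relevant items of each user:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: collect, for each user, the items marked as relevant (rating 1).
public class TrainingLoader {

    public static Map<String, List<String>> relevantItemsPerUser(Path tsv) throws IOException {
        Map<String, List<String>> relevant = new HashMap<>();
        for (String line : Files.readAllLines(tsv)) {
            String[] fields = line.split("\t");
            if (fields.length == 3 && fields[2].equals("1")) {
                relevant.computeIfAbsent(fields[0], u -> new ArrayList<>()).add(fields[1]);
            }
        }
        return relevant;
    }

    public static void main(String[] args) throws IOException {
        // Adjust the path/file name to your unzipped copy of the training set.
        Map<String, List<String>> profiles =
                relevantItemsPerUser(Paths.get("DBbook_train_binary.tsv"));
        System.out.println(profiles.size() + " users with at least one relevant item");
    }
}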


EVALUATION

To evaluate their approaches, participants are asked to submit a file containing the recommendations or the rating predictions to the evaluation system, using the web form available at http://193.204.59.20:8181/eswc2014lodrecsys/.

  • Task 1: Participants have to predict the missing ratings in the evaluation dataset available at http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/task1_useritem_evaluation_data.tsv.zip. They have to predict the missing rating for each user-item pair in the evaluation data according to the following format: userID \t itemID \t rating. The evaluation metric for this task is the RMSE.
  • Task 2: Participants have to compute recommendation lists according to the user-item pairs in the evaluation dataset available at http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/task2_useritem_evaluation_data.tsv.zip. Participants are asked to complete the user-item pairs in the evaluation data by adding the corresponding relevance score, according to the following format: userID \t itemID \t score. These relevance scores will be used by the evaluation service to form a Top-5 item recommendation list for each user; that is, for each user only items in the evaluation set are considered when forming the Top-5 recommendation list. The evaluation metric for this task is the F-measure@5.
  • Task 3: Participants are asked to submit a Top-20 recommendation list for each user. These recommendation lists have to be computed by considering all items not rated by each user and selecting the Top-20. Also in this case the file to submit has the format userID \t itemID \t score; a minimal sketch of writing a file in this format follows this list. In this task the evaluation metric is the average of Intra-List Diversity (ILD@20) and F-measure@20.
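
All three tasks share the same three-column submission layout, so a minimal Java sketch of writing a submission file looks as follows; the rows and the output file name are placeholders for whatever your recommender produces.

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Minimal sketch: write userID \t itemID \t score lines to a submission file.
public class SubmissionWriter {

    public static void main(String[] args) throws IOException {
        // Illustrative rows; in practice these come from your recommender.
        List<String[]> rows = List.of(
                new String[] {"6", "5950", "0.87"},
                new String[] {"6", "8010", "0.42"});
        try (PrintWriter w = new PrintWriter(
                Files.newBufferedWriter(Paths.get("task2_submission.tsv")))) {
            for (String[] r : rows) {
                w.println(r[0] + "\t" + r[1] + "\t" + r[2]); // userID \t itemID \t score
            }
        }
    }
}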

A description of the metrics used in the evaluation of the different tasks is available at http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/eswc2014-lodrecsys-metrics_evaluationservice.pdf

Alternatively, participants can submit their results using the Java client available at http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/lodrecsys2014challenge_evaluation.jar by launching the following command:

java -jar lodrecsys2014challenge_evaluation.jar TaskNumber GroupID pathFile
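
For example, a group with identifier 42 submitting Task 1 predictions stored in predictions.tsv would run the following (the GroupID and file name here are purely illustrative):

java -jar lodrecsys2014challenge_evaluation.jar 1 42 predictions.tsv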

USEFUL RESOURCES

Some useful LOD datasets

Within the LOD cloud there is a wealth of data related to the book domain that can be retrieved via SPARQL queries. DBpedia is certainly a good starting point, with its endpoint available at http://dbpedia.org/sparql, but participants may also look, for instance, at the British National Bibliography (http://bnb.data.bl.uk/flint) as well as at the datasets of the Library of Congress (http://id.loc.gov/).

A bit of SPARQL

SPARQL is the standard language for querying LOD datasets. As an example, the following SPARQL query returns the author of the book War and Peace from the DBpedia endpoint:

SELECT ?o 
WHERE { 
  <http://dbpedia.org/resource/War_and_Peace> <http://dbpedia.org/ontology/author> ?o.
}

while this query returns all the properties and corresponding values associated with the book War and Peace:

SELECT ?p ?o 
WHERE { 
  <http://dbpedia.org/resource/War_and_Peace> ?p ?o.
}

Participants can use similar queries to extract data about all the books in DBbook (http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/DBbook_Items_DBpedia_mapping.tsv.zip).

For those who are not familiar with SPARQL and with extracting data from LOD datasets, and who wish to learn more about it, there is a very good tutorial available at http://www.cambridgesemantics.com/semantic-university/sparql-by-example.

Participants can also use the code provided at https://github.com/vostuni/SparqlClient, a Java class for extracting data from DBpedia using SPARQL queries.
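
Alternatively, here is a minimal sketch of running the first query above with Apache Jena ARQ; the Jena dependency and package names are an assumption on our part, and the challenge does not mandate any particular client.

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

// Minimal sketch: run the War and Peace author query against the public
// DBpedia endpoint and print each binding of ?o.
public class DbpediaAuthorQuery {

    public static void main(String[] args) {
        String query = "SELECT ?o WHERE { "
                + "<http://dbpedia.org/resource/War_and_Peace> "
                + "<http://dbpedia.org/ontology/author> ?o . }";
        try (QueryExecution exec =
                     QueryExecutionFactory.sparqlService("http://dbpedia.org/sparql", query)) {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                QuerySolution solution = results.next();
                System.out.println(solution.get("o"));
            }
        }
    }
}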

More datasets

For a further offline evaluation of the proposed approaches, participants may also use the LOD-enabled datasets on movies and music available at http://sisinflab.poliba.it/semanticweb/lod/recsys/datasets/.

Evaluation tools

A Java package containing the implementation of some evaluation metrics is available at http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/ESWC2014ChallengeEvaluationTool_v1.3.zip. A description of the functionalities offered by this tool is available at http://sisinflab.poliba.it/semanticweb/lod/recsys/2014challenge/eswc2014-lodrecsys-evaluationtool.html. Participants can use the provided code for an offline evaluation of their approaches.

JUDGING AND PRIZES

After a first round of reviews, the Program Committee and the chairs will select a number of submissions that satisfy the challenge requirements; these will be presented at the conference. Submissions accepted for presentation will receive constructive reviews from the Program Committee and will be included in the post-proceedings. All accepted submissions will have a slot in a poster session dedicated to the challenge. In addition, the winners will present their work in a special slot of the main program of ESWC’14 and will be invited to submit a paper to a dedicated Semantic Web Journal special issue.

For each task we will select:

  • the best performing tool, awarded to the submission with the highest score in the evaluation
  • the most original approach, selected by the Challenge Program Committee through the reviewing process

An amount of 700 euros has already been secured for the final prize. We are currently working on securing further funding.

IMPORTANT DATES

  • EXTENDED to March 14, 2014 (was March 7), 23:59 CET: Result submission due
  • EXTENDED to March 21, 2014 (was March 14), 23:59 CET: Paper submission due
  • April 9, 2014, 23:59 CET: Notification of acceptance
  • May 27-29, 2014: The Challenge takes place at ESWC-14


CHALLENGE CHAIRS

  • Tommaso Di Noia - Polytechnic University of Bari, Italy
  • Iván Cantador - Universidad Autónoma de Madrid, Spain


EVALUATION COORDINATOR

  • Vito Claudio Ostuni - Polytechnic University of Bari, Italy


PROGRAM COMMITTEE

  • Pablo Castells, Universidad Autonoma de Madrid, Spain
  • Oscar Corcho, Universidad Politécnica de Madrid, Spain
  • Marco de Gemmis, University of Bari Aldo Moro, Italy
  • Frank Hopfgartner, Technische Universität Berlin, Germany
  • Andreas Hotho, Universität Würzburg, Germany
  • Dietmar Jannach, TU Dortmund University, Germany
  • Pasquale Lops, University of Bari Aldo Moro, Italy
  • Valentina Maccatrozzo, VU University Amsterdam, The Netherlands
  • Roberto Mirizzi, Polytechnic University of Bari, Italy
  • Alexandre Passant, seevl.fm, Ireland
  • Francesco Ricci, Free University of Bozen-Bolzano, Italy
  • Giovanni Semeraro, University of Bari Aldo Moro, Italy
  • David Vallet, NICTA, Australia
  • Manolis Wallace, University of Peloponnese, Greece
  • Markus Zanker, Alpen-Adria-Universitaet Klagenfurt, Austria
  • Tao Ye, Pandora Internet Radio, USA

ESWC CHALLENGE COORDINATOR

  • Milan Stankovic, Sépage & Université Paris-Sorbonne, France