Author Archives: selene

Vacancy @ CLTL: Scientific Programmer

Immediate opening for the following position: Scientific Programmer


The Computational Lexicology and Terminology Lab (CLTL), led by Spinoza prize winner Prof. dr. Piek Vossen, is looking for a scientific programmer with an interest in language technology.

Function title: Scientific Programmer
Fte: 0.6-0.8
VU Faculty: Humanities
Vacancy number: 17341
Closing date: Open until filled

Location: Vrije Universiteit Amsterdam, Netherlands

In the NewsReader project, CLTL has developed a pipeline architecture of software modules with which Dutch texts can be interpreted semantically. The software determines which events are mentioned, who is involved, where and when the events took place, what the sentiment of the mentioned sources towards the events is, et cetera. These interpretations are stored in an XML format, the so-called Natural Language Annotation Format (NAF). Furthermore, the software generates a representation in RDF that supports (automatic) reasoning over the data. The RDF representations are stored in a so-called triple store and can be queried by means of SPARQL. The candidate will take care of the management of this unique Dutch Natural Language Processing (NLP) pipeline.
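
As a purely illustrative aside, inspecting this kind of layered annotation XML is straightforward with standard tooling. The fragment below is a simplified, invented NAF-like document (the real NAF schema is considerably richer, and the element names here are assumptions), processed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Toy, simplified NAF-like fragment; element and attribute names are
# assumptions for illustration, not the actual NAF schema.
naf = """
<NAF version="v3">
  <text>
    <wf id="w1">Jan</wf>
    <wf id="w2">opende</wf>
    <wf id="w3">de</wf>
    <wf id="w4">winkel</wf>
  </text>
  <events>
    <event id="e1" predicate="open" anchor="w2"/>
  </events>
</NAF>
"""

root = ET.fromstring(naf)
# Map token ids to their surface forms.
tokens = {wf.get("id"): wf.text for wf in root.iter("wf")}
for ev in root.iter("event"):
    # Resolve each event's anchor back to the token it points at.
    print(ev.get("id"), ev.get("predicate"), tokens[ev.get("anchor")])
```

Querying the RDF side of the pipeline through a triple store and SPARQL follows the same idea, with the query language doing the cross-referencing that this sketch does by hand.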

The tasks are carried out in the context of CLARIAH, an NWO (Netherlands Organisation for Scientific Research) roadmap project in which researchers from across the Netherlands cooperate to develop a research infrastructure for the humanities. There is also cooperation with the Department of Computer Science at the VU, with the eScience institute on the development of demonstrators, and with researchers abroad.

Requirements
The candidate is expected to support the maintenance, use and further development of the pipeline mentioned above:

• Standardisation and metadata management;
• Software release and versioning of modules;
• Testing;
• Logging;
• Distributed parallel installations and processing;
• Compilation, installation and packaging (e.g. VMs, Docker);
• Process management;
• Integration in Virtual Research Environment for students and researchers in the Humanities;
• Installation and maintenance of demonstrators.

Ideal Applicant Requirements:
• MSc/MA in computer science/computational linguistics or an equivalent degree and/or equivalent experience;
• Extensive experience with several programming languages, including Java and Python;
• Extensive experience with Unix-like systems (Linux and Mac);
• Experience working in a team of researchers;
• Service-oriented attitude;
• Experience with large-scale and complex Big Data processing flows;
• Knowledge of data standardisation in both NLP and the Semantic Web;
• Experience with NLP software;
• (preferably) Experience with SPARQL and triple stores;
• (preferably) Experience with web-based clients for visualisation and demonstration.

Further particulars
The appointment will be initially for a period of 1 year with the possibility of an extension.

For the completion of the CLARIAH tasks, a minimum of 0.6 fte and a maximum of 0.8 fte is required.

Information about our excellent fringe benefits of employment can be found at www.workingatvu.nl. They include:
• an 8.3% end-of-year bonus and an 8% holiday allowance;
• a solid pension scheme (ABP);
• a minimum of 29 holiday days in case of full-time employment.

Salary
The salary depends on education and experience and ranges from a minimum of €2,588 gross per month to a maximum of €4,084 gross per month (salary scale 10) based on full-time employment.

Information
For additional information please contact:
Prof dr. Piek Vossen
phone: 020 59 86457
e-mail: piek.vossen@vu.nl
website: www.cltl.nl

Application
Applicants are requested to send a letter describing their abilities and motivation, accompanied by a curriculum vitae and a list of completed software projects and publications.

Please send your application to: piek.vossen@vu.nl

Vrije Universiteit Amsterdam
Attn. Faculty of Humanities
Prof dr. Piek Vossen

Please mention the vacancy number in the e-mail subject line or at the top of your letter and envelope.

Any other correspondence in response to this advertisement will not be dealt with.

Vrije Universiteit Amsterdam
Vrije Universiteit Amsterdam is a leading, innovative and growing university that is at the heart of society and actively contributes to new developments in teaching and research. Our university has ten faculties which span a wide range of disciplines, as well as several institutes, foundations, research centres, and support services. The campus is located in the fastest-growing economic region in the Netherlands (the Zuidas district of Amsterdam), and provides work for over 4,500 staff and scientific education for more than 23,000 students.


Research Masters meet Language Industry

MEET & GREET Human Language Technology (CLTL) & Language Industry

The Computational Lexicology and Terminology Lab (CLTL) is organizing a MEET & GREET between companies and Master's students on Friday, December 8, 2017, 13:30 – 18:00.

Research Masters Meet Language Industry
On the afternoon of Friday, December 8, 2017, students from the Humanities Research Master will meet companies and organizations interested in students in Language Technology and other disciplines for internships and theses. The meeting is organized by the Computational Lexicology and Terminology Lab at the VU, in cooperation with the VU Humanities Graduate School.

CLTL is one of the world’s leading research institutes in Human Language Technology. Prof. Dr. Piek Vossen, recipient of the NWO Spinoza Prize, heads a group of international researchers working on interdisciplinary projects, including the Spinoza project ‘Understanding Language by Machines’. At CLTL we are training the next generation of language technology experts. The two-year Research Master Human Language Technology is a programme run by CLTL.

The Meet & Greet is an excellent opportunity to introduce your company or organisation to Human Language Technology students, and for master students to present their research topic or area of expertise to you.

Join our afternoon program in the presence of our Reference Machine: LeoLani, a Pepper robot!

Location
Lecture hall HG 10A.00 (main building, floor 10, wing A), Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam.

Program
13:30 – 14:00 Walk-In / Doors open / Registration & Coffee
14:00 – 14:05 Introduction: Prof. Dr. Piek Vossen
14:05 – 14:45 Company pitches I
14:45 – 15:15 Student pitches I
15:15 – 15:30 Coffee Break
15:30 – 16:15 Company pitches II
16:15 – 16:45 Student pitches II
16:45 – 17:00 Q&A Reference Machine
17:00 – 18:00 Networking drinks

LeoLani, a Reference Machine

9th Global WordNet Conference Jan. 8—12, 2018

GWC 2018

The 9th Global WordNet Conference

8 — 12 January, 2018
Conference venue: Nanyang Technological University (NTU), Singapore

GWC 2018 The 9th Global WordNet Conference

Registration is now open

You can also join the event on Facebook.

The ninth Global WordNet Conference (GWC 2018) is an opportunity for researchers and developers to present and discuss their latest results on the development, enrichment and exploitation of wordnets for various languages around the world.

This conference is hosted by the Computational Linguistics Lab at Nanyang Technological University, Singapore and the Global WordNet Association.

Conference Chairs:

Christiane Fellbaum, fellbaum@princeton.edu
Piek Vossen, piek.vossen@vu.nl

Local Organizing Committee:

Francis Bond, Luís Morgado da Costa, František Kratochvíl, Takayuki Kuribayashi

Call for Student Assistants

We’re hiring academic assistants!

Are you a Master's student in Linguistics, Computer Science, AI or Communication Science? Do you want to get paid for working on an exciting research project that combines research strengths from different disciplines?

We are always looking for talented students for projects involving computational linguistics, computer science and communication science. Positions are for 1 day per week during the academic year 2017-2018.

Here you can browse recent annotation projects at CLTL to get an idea: Annotation projects


Several projects are currently looking for student assistants.

If you are interested but want to know more about the possible projects and what to do please get in touch with Chantal van Son. Otherwise, send a motivation letter and CV to the contact person for each project.

Preferred knowledge and skills are:

  • a strong background in linguistics and an affinity with technology (programming skills are a plus), or
  • a strong technological background and an interest in language technology.
  • Some projects require knowledge of Dutch, others a good understanding of English.

Why you should apply:

  • You will take part in a real research project and become knowledgeable about the research field;
  • You will collaborate with fellow students and researchers and learn how to do interdisciplinary research;
  • Topics of interest can be used for term papers and theses;
  • You might even have the chance to publish a paper and attend a conference;
  • The work hours are flexible;
  • It is an excellent opportunity to boost your CV;
  • And you get paid!

Call for VU University Research Fellow 2017-2018

Apply for University Research Fellow 2017-2018
Deadline Friday 30 June 2017

Who makes our robots talk?

Who takes up this challenge and the exciting opportunity to work in an inspiring research group that is among the best in the world in the area of natural language understanding?

Spinoza prize winner Prof. dr. Piek Vossen has the honour to invite you to apply for the position of University Research Fellow for the academic year 2017-2018. As a University Research Fellow, you will work for one year, one day a week, on a prestigious research project within the research group of Prof. Vossen: the Computational Lexicology and Terminology Lab (CLTL).

Humanoid robots: Pepper by Aldebaran Robotics and SoftBank, and NAO by Aldebaran Robotics.

We recently bought a robot and now want you to plug in our natural language processing technology so that the robot can respond to people in an intelligent way. If you are a wise girl or wise guy and you are interested in Artificial Intelligence, Natural Language Processing and robotics, then you are the perfect candidate to turn our robots into wise bots.

You will work with a real Pepper or NAO robot. The programming environment is Choregraphe and some programming skills in Python are recommended.

As a URF, you will have the chance to publish a paper and attend a conference. It is also an honourable position that looks great on your CV. You will work with PhD students and postdocs doing exciting work in the area of natural language understanding. There is an opportunity to present a talking robot to a general audience and primary school kids at the Weekend of Science (“Weekend van de Wetenschap”) in October, and your robot can join you at the opening of the new Computer Science building in 2018.

If you win the fellowship, your activities will be funded for one day a week for one year, starting September 2017.

Piek Vossen appointed Pia Sommerauer as VU Fellow for the 2016-2017 academic year.

If you are interested, send an email to Selene Kolman by Friday 30 June 2017, listing:

— a brief motivation
— your interests and ideas related to Natural Language Processing and robotics
— your (Python) programming skills
— your undergraduate degree
— the master courses you have taken and intend to take
— your list of grades

For more information visit websites below or contact:
Prof. dr. Piek Vossen
Selene Kolman

Further information on VU University Research Fellowship (URF)

Prof. dr. Piek Vossen

Professor Computational Lexicology
Language, Literature and Communication
Faculty of Humanities, VU University
de Boelelaan 1105, 1081 HV Amsterdam, The Netherlands

Piek Vossen appointed Soufyan Belkaid as VU Fellow for the 2015-2016 academic year.

Piek Vossen appointed Chantal van Son as VU University Research Fellow for the 2014-2015 academic year.

Controversy in Web Data — ADS Coffee & Data

ADS Coffee & Data: Controversy in Web Data
by Amsterdam Data Science

Date: Friday 09 June
Time: 09:00 – 11:00

Location: VU Amsterdam, HG-16A00 Kerkzaal, 16th floor main building VU
De Boelelaan, Amsterdam, Nederland

Overview: This edition of the ADS meetup will focus on the topic of “How to deal with controversy, bias, quality and opinions on the Web”. It is organised in the context of the COMMIT/ControCurator project, in which VU and UvA computer scientists and humanities researchers jointly investigate the computational modeling of controversial issues on the Web, and explore its application in real use cases within existing organisational pipelines, e.g. Crowdynews and the Netherlands Institute for Sound and Vision.
09:00-09:10 Coffee

Introduction & Chair by Lora Aroyo, Full Professor at the Web & Media group, VU Computer Science

09:10-09:20: Kaspar Beelen – Detecting Controversies in Online News Media (UvA, Faculty of Humanities)

09:20-09:30: Benjamin Timmermans – Understanding Controversy Using Collective Intelligence (VU, Computer Science)

09:30-09:45: Gerben van Eerten – Crowdynews deploying ControCurator

09:45-10:00: Davide Ceolin – (VU, Computer Science)

10:00-10:15: Damian Trilling – (UvA, Faculty of Social and Behavioural Sciences)

10:15-10:30: Daan Oodijk (Blendle)

10:30-10:45: Andy Tanenbaum – “Skewing the data”

10:45-11:00: Q&A Coffee

Registration & further information: https://www.meetup.com/Amsterdam-Data-Science/events/239903981/

Minh Le and Antske Fokkens’ long paper accepted for EACL 2017

Title: Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing

Conference: EACL 2017 (European Chapter of the Association for Computational Linguistics), at Valencia, 3-7 April 2017.

Authors: Minh Le and Antske Fokkens

Abstract:
Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error propagation. Reinforcement learning improves accuracy of both labeled and unlabeled dependencies of the Stanford Neural Dependency Parser, a high performance greedy parser, while maintaining its efficiency. We investigate the portion of errors which are the result of error propagation and confirm that reinforcement learning reduces the occurrence of error propagation.

Team CLARIAH wins Audience Award at HackaLOD 2016

(A version of this post previously appeared on http://www.clariah.nl/en/new/blogs/575-team-clariah-wins-audience-award-at-hackalod-2016)

It all seemed rather funny to them, until the very moment they laid eyes upon the prison block. As ‘Team Clariah’, Marieke van Erp (VU, WP3) and Richard Zijdeman (IISG, WP4) participated in the National Library’s HackaLOD on 11-12 November. Alongside seven other teams, they faced the challenge of building a cool (prototype) application using Linked Open Data made available especially for this event by the National Library and heritage partners. It had to be done within 24 hours… Inside a former prison… Here’s their account of the event.

We set out on Friday somewhat dispirited, as our third team mate Melvin Wevers (UU) was caught out by a cold. Upon arrival, it turned out we had two cells: one for hacking and one for sleeping (well, more like three hours of tossing and turning). As you’d expect, the cells were not exactly cosy, but the organisers had provided goodie bags whose contents were put to good use, even for a midnight jaw harp concert.

With that, and our pre-arranged plan to tell stories around buildings, we set out to build our killer app. We found several datasets that contain information about buildings. The BAG, for example, contains addresses, geo-coordinates and information about how a building is used (as a shop or a gathering place) and about ‘mutations’ (things that happened to the building). What it doesn’t contain is building names (for example Rijksmuseum or Wolvenburg), which are in the Rijksmonumenten dataset. The Rijksmonumenten dataset, in turn, doesn’t contain addresses, but as both datasets contain geo-coordinates, they can be linked. Yay for Linked Data!
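
As a rough sketch of this coordinate-based linking (with invented records and field names, not the real BAG or Rijksmonumenten schemas), two datasets can be joined by bucketing rounded coordinates:

```python
# Toy records standing in for BAG (addresses) and Rijksmonumenten (names);
# the field names and values are invented for illustration.
bag = [
    {"address": "Kloveniersburgwal 73, Amsterdam", "lat": 52.3697, "lon": 4.8997},
    {"address": "Museumstraat 1, Amsterdam", "lat": 52.3600, "lon": 4.8852},
]
monuments = [
    {"name": "Rijksmuseum", "lat": 52.3600, "lon": 4.8852},
]

def key(rec, precision=4):
    # Round coordinates so near-identical points fall in the same bucket.
    return (round(rec["lat"], precision), round(rec["lon"], precision))

# Index monument names by coordinate bucket, then link addresses to names.
index = {key(m): m["name"] for m in monuments}
linked = [
    {"address": b["address"], "name": index[key(b)]}
    for b in bag if key(b) in index
]
print(linked)
```

In practice, coordinates from two independent sources rarely match exactly, so a real linker would use a distance threshold or nearest-neighbour search instead of exact bucket equality.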

To tell the stories, we wanted to find some more information in the National Library’s newspaper collection. With some help from other hackers we managed to efficiently bring up news articles that mention a particular location. With some manual analysis we found, for example, that for Kloveniersburgwal 73 there was a steady stream of ads asking for ‘decent’ kitchen maids up until 1890, followed by a sudden spike in ads announcing real estate. It turns out a notary had moved in, for whom another (not yet linked) dataset could also provide a marriage licence, confirmed by a wedding ad in the newspaper. These sorts of stories can give us more insight into what happened in a particular building at a given time.

We have taken some first steps towards analysing these ads automatically to detect such changes, in order to generate timelines for locations, but we didn’t get that done in 24 hours. However, the audience was sufficiently pleased with our idea for us to win the audience award! (Admittedly to our great surprise, as the other teams’ ideas were all really awesome as well.) We’re now looking for funding to complete the prototype.

In summary, it was all great fun, not in the least due to great organisation by the National Library as well as the nice ‘bonding’ atmosphere among the teams. So, our lessons learnt:

  • prison food is really not that bad (and there was lots of it)
  • 24 hours of hacking is heaps of fun
  • the data always turn out to behave differently from what you’d expect
  • isolated from the daily routine, events like these prove crucial to foster new ideas and relations, in order to keep the field in motion.

 

On Wednesday 16 November, CLTL member Marieke van Erp was interviewed on Radio 1 about the hackathon, together with Martijn Kleppe, one of the HackaLOD organisers. You can listen back to it here (in Dutch).

Papers accepted at COLING 2016

Two papers from our group have been accepted at the 26th International Conference on Computational Linguistics (COLING 2016), in Osaka, Japan, from 11 to 16 December 2016.


Semantic overfitting: what ‘world’ do we consider when evaluating disambiguation of text? by Filip Ilievski, Marten Postma and Piek Vossen

Abstract
Semantic text processing faces the challenge of defining the relation between lexical expressions and the world to which they make reference within a period of time. It is unclear whether the current test sets used to evaluate disambiguation tasks are representative for the full complexity considering this time-anchored relation, resulting in semantic overfitting to a specific period and the frequent phenomena within.
We conceptualize and formalize a set of metrics which evaluate this complexity of datasets. We provide evidence for their applicability on five different disambiguation tasks. Finally, we propose a time-based, metric-aware method for developing datasets in a systematic and semi-automated manner.

More is not always better: balancing sense distributions for all-words Word Sense Disambiguation by Marten Postma, Ruben Izquierdo and Piek Vossen

Abstract
Current Word Sense Disambiguation systems show an extremely low performance on low frequent senses, which is mainly caused by the difference in sense distributions between training and test data. The main focus in tackling this problem has been on acquiring more data or selecting a single predominant sense and not necessarily on the meta properties of the data itself. We demonstrate that these properties, such as the volume, provenance and balancing, play an important role with respect to system performance. In this paper, we describe a set of experiments to analyze these meta properties in the framework of a state-of-the-art WSD system when evaluated on the SemEval-2013 English all-words dataset. We show that volume and provenance are indeed important, but that perfect balancing of the selected training data leads to an improvement of 21 points and exceeds state-of-the-art systems by 14 points while using only simple features. We therefore conclude that unsupervised acquisition of training data should be guided by strategies aimed at matching meta-properties.

LREC2016

CLTL papers, oral presentations, poster & demo sessions at LREC2016: 10th edition of the Language Resources and Evaluation Conference, 23-28 May 2016, Portorož (Slovenia)

LREC2016 Language Resources and Evaluation Conference, 23-28 May 2016, Portorož Slovenia

LREC2016 Conference Programme

Monday 23 May 2016

11.00 – 11.45 (Session 2: Lightning talks part II)
“Multilingual Event Detection using the NewsReader Pipelines” by R. Agerri, I. Aldabe, E. Laparra, G. Rigau, A. Fokkens, P. Huijgen, R. Izquierdo, M. van Erp, P. Vossen, A. Minard and B. Magnini

Abstract
We describe a novel modular system for cross-lingual event extraction for English, Spanish, Dutch and Italian texts. The system consists of a ready-to-use modular set of advanced multilingual Natural Language Processing (NLP) tools. The pipeline integrates modules for basic NLP processing as well as more advanced tasks such as cross-lingual Named Entity Linking, Semantic Role Labeling and time normalization. Thus, our cross-lingual framework allows for the interoperable semantic interpretation of events, participants, locations and time, as well as the relations between them.

Tuesday 24 May 2016

09:15 – 10:30 Oral Session 1
“Stereotyping and Bias in the Flickr30k Dataset” by Emiel van Miltenburg

Abstract
An untested assumption behind the crowdsourced descriptions of the images in the Flickr30k dataset (Young et al., 2014) is that they “focus only on the information that can be obtained from the image alone” (Hodosh et al., 2013, p. 859). This paper presents some evidence against this assumption, and provides a list of biases and unwarranted inferences that can be found in the Flickr30k dataset. Finally, it considers methods to find examples of these, and discusses how we should deal with stereotype-driven descriptions in future applications.

Day 1, Wednesday 25 May 2016

11:35 – 13:15 Area 1 – P04 Information Extraction and Retrieval
“NLP and public engagement: The case of the Italian School Reform” by Tommaso Caselli, Giovanni Moretti, Rachele Sprugnoli, Sara Tonelli, Damien Lanfrey and Donatella Solda Kutzman

Abstract
In this paper we present PIERINO (PIattaforma per l’Estrazione e il Recupero di INformazione Online), a system that was implemented in collaboration with the Italian Ministry of Education, University and Research to analyse the citizens’ comments given in #labuonascuola survey. The platform includes various levels of automatic analysis such as key-concept extraction and word co-occurrences. Each analysis is displayed through an intuitive view using different types of visualizations, for example radar charts and sunburst. PIERINO was effectively used to support shaping the last Italian school reform, proving the potential of NLP in the context of policy making.

15:05 – 16:05 Emerald 2 – O8 Named Entity Recognition
“Context-enhanced Adaptive Entity Linking” by Giuseppe Rizzo, Filip Ilievski, Marieke van Erp, Julien Plu and Raphael Troncy

Abstract
More and more knowledge bases are publicly available as linked data. Since these knowledge bases contain structured descriptions of real-world entities, they can be exploited by entity linking systems that anchor entity mentions from text to the most relevant resources describing those entities. In this paper, we investigate adaptation of the entity linking task using contextual knowledge. The key intuition is that entity linking can be customized depending on the textual content, as well as on the application that would make use of the extracted information. We present an adaptive approach that relies on contextual knowledge from text to enhance the performance of ADEL, a hybrid linguistic and graph-based entity linking system. We evaluate our approach on a domain-specific corpus consisting of annotated WikiNews articles.

16:45 – 18:05 – Area 1 – P12
“GRaSP: A multi-layered annotation scheme for perspectives” by Chantal van Son, Tommaso Caselli, Antske Fokkens, Isa Maks, Roser Morante, Lora Aroyo and Piek Vossen

Abstract / Poster
This paper presents a framework and methodology for the annotation of perspectives in text. In the last decade, different aspects of linguistic encoding of perspectives have been targeted as separated phenomena through different annotation initiatives. We propose an annotation scheme that integrates these different phenomena. We use a multilayered annotation approach, splitting the annotation of different aspects of perspectives into small subsequent subtasks in order to reduce the complexity of the task and to better monitor interactions between layers. Currently, we have included four layers of perspective annotation: events, attribution, factuality and opinion. The annotations are integrated in a formal model called GRaSP, which provides the means to represent instances (e.g. events, entities) and propositions in the (real or assumed) world in relation to their mentions in text. Then, the relation between the source and target of a perspective is characterized by means of perspective annotations. This enables us to place alternative perspectives on the same entity, event or proposition next to each other.

18:10 – 19:10 – Area 2 – P16 Ontologies
“The Event and Implied Situation Ontology: Application and Evaluation” by Roxane Segers, Marco Rospocher, Piek Vossen, Egoitz Laparra, German Rigau and Anne-Lyse Minard

Abstract / Poster
This paper presents the Event and Implied Situation Ontology (ESO), a manually constructed resource which formalizes the pre and post situations of events and the roles of the entities affected by an event. The ontology is built on top of existing resources such as WordNet, SUMO and FrameNet. The ontology is injected to the Predicate Matrix, a resource that integrates predicate and role information from amongst others FrameNet, VerbNet, PropBank, NomBank and WordNet. We illustrate how these resources are used on large document collections to detect information that otherwise would have remained implicit. The ontology is evaluated on two aspects: recall and precision based on a manually annotated corpus and secondly, on the quality of the knowledge inferred by the situation assertions in the ontology. Evaluation results on the quality of the system show that 50% of the events typed and enriched with ESO assertions are correct.

Day 2, Thursday 26 May 2016

10:25 – 10:45 – O20
“Addressing the MFS bias in WSD systems” by Marten Postma, Ruben Izquierdo, Eneko Agirre, German Rigau and Piek Vossen

11:45 – 13:05 – Area 2 – P25
“The VU Sound Corpus: Adding more fine-grained annotations to the Freesound database” by Emiel van Miltenburg, Benjamin Timmermans and Lora Aroyo

Day 3, Friday 27 May 2016

10:45 – 11:05 – O38
“Temporal Information Annotation: Crowd vs. Experts” by Tommaso Caselli, Rachele Sprugnoli and Oana Inel

Abstract
This paper describes two sets of crowdsourcing experiments on temporal information annotation conducted in two languages, English and Italian. The first experiment, launched on the CrowdFlower platform, was aimed at classifying temporal relations, given target entities. The second, relying on the CrowdTruth metric, consisted of two subtasks: one devoted to the recognition of events and temporal expressions, and one to the detection and classification of temporal relations. The outcomes of the experiments suggest that crowdsourced annotations are valuable even for a task as complex as Temporal Processing.

12:45 – 13:05 – O42
“Crowdsourcing Salient Information from News and Tweets” by Oana Inel, Tommaso Caselli and Lora Aroyo

Abstract
The increasing streams of information pose challenges to both humans and machines. On the one hand, humans need to identify relevant information and consume only the information that matches their interests. On the other hand, machines need to understand the information published in online data streams and generate concise and meaningful overviews. We consider events as prime factors for querying information and generating meaningful context. The focus of this paper is to acquire empirical insights for identifying salience features in tweets and news about a target event, i.e., the event of “whaling”. We first derive a methodology to identify such features by building up a knowledge space of the event, enriched with relevant phrases and sentiments and ranked by novelty. We applied this methodology to tweets and have performed preliminary work towards adapting it to news articles. Our results show that crowdsourcing text relevance, sentiments and novelty (1) can be a main step in identifying salient information, and (2) provides a deeper and more precise understanding of the data at hand compared to state-of-the-art approaches.

14:55 – 16:15 – Area 2 – P54
“Two architectures for parallel processing for huge amounts of text” by Mathijs Kattenberg, Zuhaitz Beloki, Aitor Soroa, Xabier Artola, Antske Fokkens, Paul Huygen and Kees Verstoep

Abstract
This paper presents two alternative NLP architectures for analyzing massive amounts of documents using parallel processing. The two architectures target different processing scenarios, namely batch processing and streaming processing. The batch-processing scenario aims at optimizing the overall throughput of the system, i.e., minimizing the total time spent on processing all documents. The streaming architecture aims to minimize the time to process real-time incoming documents and is therefore especially suitable for live feeds. The paper presents experiments with both architectures and reports the overall gain when they are used for batch as well as for streaming processing. All the software described in the paper is publicly available under free licenses.
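The contrast between the two scenarios can be sketched in miniature as follows. This is a toy illustration with a dummy `analyze` function standing in for a real NLP pipeline; the paper's actual architectures run on real cluster and streaming infrastructure.

```python
from concurrent.futures import ThreadPoolExecutor
import queue
import threading

def analyze(doc):
    """Stand-in for an NLP pipeline module (tokenizing, parsing, NER, ...)."""
    return len(doc.split())          # dummy analysis: token count

# Batch scenario: optimize throughput over a fixed document collection
# by processing documents in parallel and collecting all results.
def batch_process(docs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(analyze, docs))

# Streaming scenario: handle documents as they arrive, minimizing the
# latency per document rather than the total time for a collection.
def stream_process(incoming, results):
    while True:
        doc = incoming.get()
        if doc is None:              # sentinel value shuts the worker down
            break
        results.append(analyze(doc))

results = []
incoming = queue.Queue()
worker = threading.Thread(target=stream_process, args=(incoming, results))
worker.start()
for doc in ["breaking news item", "follow-up report"]:
    incoming.put(doc)                # documents trickle in over time
incoming.put(None)
worker.join()
```

The essential difference is where the waiting happens: the batch version blocks until the whole collection is done, while the streaming worker emits a result as soon as each document arrives.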

14:55 – 15:15 – Emerald 1 – O47
“Evaluating Entity Linking: An Analysis of Current Benchmark Datasets and a Roadmap for Doing a Better Job” by Marieke van Erp, Pablo Mendes, Heiko Paulheim, Filip Ilievski, Julien Plu, Giuseppe Rizzo and Joerg Waitelonis

Abstract
Entity linking has become a popular task in both natural language processing and semantic web communities. However, we find that the benchmark datasets for entity linking tasks do not accurately evaluate entity linking systems. In this paper, we aim to chart the strengths and weaknesses of current benchmark datasets and sketch a roadmap for the community to devise better benchmark datasets.
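For context, entity-linking evaluation typically scores (mention span, linked entity) pairs against a gold standard. The sketch below uses invented data and micro precision/recall, one common convention; it is exactly the variation in such conventions and gold standards across benchmark datasets that motivates the paper.

```python
# Hedged sketch of a micro-averaged entity-linking evaluation.
# The documents, spans and DBpedia URIs below are invented examples.

def evaluate(gold, predicted):
    """Micro precision/recall/F1 over (mention-span, entity) pairs."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                       # exact span + entity match
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

gold = [((0, 6), "dbpedia:Berlin"), ((10, 17), "dbpedia:Germany")]
pred = [((0, 6), "dbpedia:Berlin"), ((10, 17), "dbpedia:German_language")]
p, r, f1 = evaluate(gold, pred)
# One of the two links is correct, so p == r == 0.5
```

Note how much the score depends on evaluation choices the function hard-codes, such as requiring exact span matches: relaxing or tightening such choices is one way benchmark datasets diverge.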

15:35 – 15:55 – O48
“MEANTIME, the NewsReader Multilingual Event and Time Corpus” by Anne-Lyse Minard, Manuela Speranza, Ruben Urizar, Begoña Altuna, Marieke van Erp, Anneleen Schoen and Chantal van Son

Abstract
In this paper, we present the NewsReader MEANTIME corpus, a semantically annotated corpus of Wikinews articles. The corpus consists of 480 news articles: 120 English news articles and their translations into Spanish, Italian, and Dutch. MEANTIME contains annotations at different levels. The document-level annotation includes markables (e.g. entity mentions, event mentions, time expressions, and numerical expressions), relations between markables (modeling, for example, temporal information and semantic role labeling), and entity and event intra-document coreference. The corpus-level annotation includes entity and event cross-document coreference. Semantic annotation on the English section was performed manually; for the annotation in Italian, Spanish, and (partially) Dutch, a procedure was devised to automatically project the annotations on the English texts onto the translated texts, based on the manual alignment of the annotated elements. This not only enabled us to speed up the annotation process but also provided cross-lingual coreference. The English section of the corpus was extended with timeline annotations for the SemEval 2015 TimeLine shared task. The First CLIN Dutch Shared Task at CLIN26 was based on the Dutch section, while the EVALITA 2016 FactA (Event Factuality Annotation) shared task, based on the Italian section, is currently being organized.