Category Archives: Natural language understanding

Team CLARIAH wins Audience Award at Hackalod 2016

(A version of this post previously appeared on http://www.clariah.nl/en/new/blogs/575-team-clariah-wins-audience-award-at-hackalod-2016)

It all seemed rather funny to them, until the very moment they laid eyes on the prison block. As ‘Team CLARIAH’, Marieke van Erp (VU, WP3) and Richard Zijdeman (IISG, WP4) participated in the National Library’s HackaLOD on 11-12 November. Alongside seven other teams, they faced the challenge of building a cool (prototype) application using Linked Open Data made available especially for this event by the National Library and heritage partners. It had to be done within 24 hours… Inside a former prison… Here’s their account of the event.

We set out on Friday somewhat dispirited, as our third team mate Melvin Wevers (UU) had been struck down by a cold. Upon arrival, it turned out we had two cells: one for hacking and one for sleeping (well, more like three hours of tossing and turning). As you’d expect, the cells were not exactly cosy, but the organisers had provided goodie bags whose contents were put to good use, even yielding a midnight Jaw Harp concert.

With that, and our pre-prepared plan to tell stories around buildings, we set out to build our killer app. We found several datasets that contain information about buildings. The BAG, for example, contains addresses, geo-coordinates, information about how a building is used (as a shop or a gathering place, say) and ‘mutations’ (things that happened to the building). What it doesn’t contain are building names (for example Rijksmuseum or Wolvenburg); those are in the Rijksmonumenten dataset. The Rijksmonumenten dataset, in turn, doesn’t contain addresses, but as both datasets contain geo-coordinates, they can be linked. Yay for Linked Data!
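To make the linking idea concrete, here is a minimal sketch of how two building records could be joined on their coordinates; the record layout and field names below are invented for illustration and do not reflect the actual BAG or Rijksmonumenten schemas:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical, simplified records; the real datasets use different schemas.
bag = [{"address": "Museumstraat 1, Amsterdam", "lat": 52.3600, "lon": 4.8852}]
monuments = [{"name": "Rijksmuseum", "lat": 52.3599, "lon": 4.8850}]

# Link a monument name to an address when the coordinates are close enough.
for m in monuments:
    for b in bag:
        if haversine_m(m["lat"], m["lon"], b["lat"], b["lon"]) < 50:  # 50 m threshold
            print(f'{m["name"]} -> {b["address"]}')
```

At the hackathon the data were Linked Open Data rather than Python lists, but the principle is the same: when two datasets share no identifier, a proximity match on geo-coordinates can serve as the link.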

To tell the stories, we wanted to find some more information in the National Library’s newspaper collection. With some help from other hackers, we managed to efficiently retrieve news articles that mention a particular location. With some manual analysis we found, for example, that for Kloveniersburgwal 73 there was a steady stream of ads asking for ‘decent’ kitchen maids up until 1890, followed by a sudden spike in ads announcing real estate. It turns out a notary had moved in, for whom another (not yet linked) dataset could also provide a marriage license, confirmed by a wedding ad in the newspaper. These sorts of stories can give us more insight into what happened in a particular building at a given time.
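To give an idea of the kind of automatic analysis this points towards, here is a minimal sketch that counts ad types per decade for one address; the counts and labels are invented for illustration and are not the actual newspaper data:

```python
from collections import Counter

# Hypothetical (year, ad_type) pairs for one address; not the real newspaper collection.
ads = [(1885, "kitchen maid"), (1887, "kitchen maid"), (1889, "kitchen maid"),
       (1891, "real estate"), (1892, "real estate"), (1893, "real estate")]

# Count ad types per decade to spot a change in how the building was used.
per_decade = Counter(((year // 10) * 10, ad_type) for year, ad_type in ads)
for (decade, ad_type), n in sorted(per_decade.items()):
    print(f"{decade}s: {ad_type} x{n}")
```

A sudden shift in the dominant ad type, as in the Kloveniersburgwal 73 example, would show up directly in such per-period counts.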

We took the first steps towards analysing these ads automatically to detect such changes and generate timelines for locations, but we didn’t get that done in 24 hours. However, the audience was sufficiently pleased with our idea for us to win the audience award! (Admittedly to our great surprise, as the other teams’ ideas were all really awesome as well.) We’re now looking for funding to complete the prototype.

In summary, it was all great fun, not least thanks to the great organisation by the National Library and the nice ‘bonding’ atmosphere among the teams. So, our lessons learnt:

  • prison food is really not that bad (and there was lots of it)
  • 24 hours of hacking is heaps of fun
  • the data always turn out to behave differently from what you’d expect
  • isolated from the daily routine, events like these prove crucial for fostering new ideas and relations, and for keeping the field in motion.

 

On Wednesday 16 November, CLTL member Marieke van Erp was also interviewed on Radio 1 about the hackathon, together with Martijn Kleppe, one of the HackaLOD organisers. You can listen to the interview here (in Dutch).

Papers accepted at COLING 2016

Two papers from our group have been accepted at the 26th International Conference on Computational Linguistics (COLING 2016), which takes place in Osaka, Japan, from 11 to 16 December 2016.


Semantic overfitting: what ‘world’ do we consider when evaluating disambiguation of text? by Filip Ilievski, Marten Postma and Piek Vossen

Abstract
Semantic text processing faces the challenge of defining the relation between lexical expressions and the world to which they make reference within a period of time. It is unclear whether the current test sets used to evaluate disambiguation tasks are representative for the full complexity considering this time-anchored relation, resulting in semantic overfitting to a specific period and the frequent phenomena within.
We conceptualize and formalize a set of metrics which evaluate this complexity of datasets. We provide evidence for their applicability on five different disambiguation tasks. Finally, we propose a time-based, metric-aware method for developing datasets in a systematic and semi-automated manner.

More is not always better: balancing sense distributions for all-words Word Sense Disambiguation by Marten Postma, Ruben Izquierdo and Piek Vossen

Abstract
Current Word Sense Disambiguation systems show an extremely low performance on low frequent senses, which is mainly caused by the difference in sense distributions between training and test data. The main focus in tackling this problem has been on acquiring more data or selecting a single predominant sense and not necessarily on the meta properties of the data itself. We demonstrate that these properties, such as the volume, provenance and balancing, play an important role with respect to system performance. In this paper, we describe a set of experiments to analyze these meta properties in the framework of a state-of-the-art WSD system when evaluated on the SemEval-2013 English all-words dataset. We show that volume and provenance are indeed important, but that perfect balancing of the selected training data leads to an improvement of 21 points and exceeds state-of-the-art systems by 14 points while using only simple features. We therefore conclude that unsupervised acquisition of training data should be guided by strategies aimed at matching meta-properties.
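For readers curious what ‘balancing’ the sense distribution of training data can look like in practice, here is a minimal sketch of per-sense downsampling; this is an illustration under simplified assumptions, not the authors’ actual pipeline or data format:

```python
import random
from collections import defaultdict

def balance_senses(examples, seed=0):
    """Downsample training examples so every sense of a lemma is equally frequent.

    `examples` is a list of (lemma, sense, sentence) tuples, a simplified stand-in
    for real WSD training data.
    """
    by_sense = defaultdict(list)
    for lemma, sense, sentence in examples:
        by_sense[(lemma, sense)].append((lemma, sense, sentence))
    # Per lemma, keep as many examples per sense as the rarest sense has.
    per_lemma_min = defaultdict(lambda: float("inf"))
    for (lemma, sense), items in by_sense.items():
        per_lemma_min[lemma] = min(per_lemma_min[lemma], len(items))
    rng = random.Random(seed)
    balanced = []
    for (lemma, sense), items in by_sense.items():
        balanced.extend(rng.sample(items, per_lemma_min[lemma]))
    return balanced

data = [("bank", "river", "sat on the bank"), ("bank", "river", "the bank of the Rhine"),
        ("bank", "finance", "robbed the bank")]
print(len(balance_senses(data)))  # 2: one example kept per sense of 'bank'
```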

VU Master’s Day, Mar. 12 2016

Visit our Research Master Linguistic Engineering at the VU Master’s Day on Saturday 12 March 2016.
Flyer Linguistic Engineering, Specialization of the Research Master Linguistics.
Overview Courses Linguistic Engineering.

On 12 March 2016 you will have the opportunity to visit the Master’s Day and obtain detailed information on our Research Master Linguistic Engineering, Specialization of the Research Master Linguistics.

Date Saturday, 12 March 2016
Time 9.30 am – 2.30 pm
Target Group Higher education students and professionals
Location Main Building, VU University Amsterdam, De Boelelaan 1105 (directions)
Please note Preregistration is open until 12.00 pm on Friday 11 March

Programme


Specialization ‘Linguistic Engineering’ 2017—2018

Linguistic Engineering is a specialization in the Research Master Linguistics at VU Amsterdam. More details on the: Programme, Admission and Application.

Overview Courses Research Master Specialization: Linguistic Engineering.
View/download the flyer Research Master Linguistic Engineering.
Programme, admission and application.

Language technology is a rapidly developing field of research. In humanities research today, a firm background in language technology is extremely valuable when working with large datasets. The Computational Lexicology and Terminology Lab (CLTL) offers a specialization in the Research Master Linguistics in which students are trained as linguistic engineers. A linguistic engineer has knowledge of language technology as used in computer applications (e.g. search engines) and of the relevant linguistics.

WHY STUDY AT VU AMSTERDAM?
• The Computational Lexicology and Terminology Lab (CLTL) is one of the world’s leading research institutes in Linguistic Engineering.
• Prof. Dr. Piek Vossen, winner of an NWO Spinoza Prize, leads the group of researchers and several national and international interdisciplinary projects, including the Spinoza project ‘Understanding Language by Machines’.
• Become part of an international group of researchers at Vrije Universiteit Amsterdam!

CAREER PROSPECTS
You can set up your own field of research as a PhD student, or you can embark on a career at a research institute. Other opportunities lie in industry, which is in need of linguists with a technical background. Being a graduate of the CLTL will certainly enhance your chances.


ADMISSION REQUIREMENTS
• Applicants must have at least a Bachelor’s degree in Linguistics, Artificial Intelligence or a comparable Bachelor’s programme.
• Applicants who do not meet the requirement(s) are also encouraged to apply, provided that they have a sound academic background and a demonstrated interest in and knowledge of engineering and/or linguistics.

SPECIALIZATION: LINGUISTIC ENGINEERING
IN RESEARCH MASTER: LINGUISTICS
LANGUAGE: ENGLISH
DURATION: 2 YEARS FULLTIME
DEADLINE: APRIL 1 2016 (NON-EU), JUNE 1 2016 FOR DUTCH AND EU STUDENTS

For more details on the programme, admission and application:
WWW.FGW.VU.NL
WWW.VU.NL/MA-LINGUISTICS
Dr. H. D. van der Vliet: +31 (0)20 598 6466
EMAIL: Dr. H. D. van der Vliet

Computational Lexicology and Terminology Lab (CLTL)
Language, Literature and Communication
Faculty of Humanities
VU Amsterdam
de Boelelaan 1105
1081 HV Amsterdam
The Netherlands

General information on the Research Master’s in Linguistics at VU Amsterdam.

Master’s Evening, Dec. 01 2015

Master’s Evening, 1 December 2015

On Tuesday 1 December 2015 you can visit our Master’s Evening, where information sessions will be held on most of our Master’s degree programmes. Please register and choose which of these sessions you would like to attend.

Date: Tuesday 1 December 2015
Time: 17:00 – 20:30
For whom: Higher education students and professionals
Location: Main building VU Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam

Please find information on our Research Master Specialization ‘Linguistic Engineering’

If you are not able to attend our Master’s Evening on 1 December 2015, you can visit the Master’s Day on Saturday 12 March 2016 or find out more about VU Amsterdam and our study programmes:
• Find your international Master’s degree programme and contact the coordinator for questions
• Meet VU Amsterdam representatives in your own country
• Visit our international students Facebook

Press release VU University on NewsReader: Join the hackathon!

VU professor and Spinoza Prize winner Piek Vossen presents NewsReader
Discover for yourself this new technology that reads the news

In 2013, Piek Vossen (professor of computational lexicology) started the NewsReader project, together with researchers in Trento and San Sebastian and with the companies LexisNexis (NL), SynerScope (NL) and the English firm ScraperWiki, to develop a computer program that ‘reads’ the news every day and keeps precise track of what happened in the world, when and where, and who was involved. The project received 2.8 million euros in funding from the European Commission.

SynerScope‘s visualization: extraction from 1.26M news articles

Reading the news in four languages
Over the past three years the researchers have developed technology that lets a computer read the news automatically in four languages. Millions of newspaper articles have been turned into a searchable database in which duplicates have been removed, complementary information from different articles has been merged in a smart way, and the information has been enriched with fine-grained types, so that you can search not only for person names such as ‘Mark Rutte’ and ‘Diederik Samsom’, but also for entities of the type ‘politician’.

NewsReader presentation
On Tuesday afternoon 24 November 2015, Piek Vossen’s research group, the Computational Lexicology & Terminology Lab (CLTL), is organising a workshop at which the final results of the project will be presented. Several speakers will also give their view on the project, including VU professor Frank van Harmelen (Knowledge Representation & Reasoning), Bernardo Magnini, researcher at FBK in Trento, and Sybren Kooistra, data journalist at de Volkskrant and co-founder and editor-in-chief of Yournalism, the platform for investigative journalism.

Join the hackathon!
On 25 November, users can try out the news database, built from millions of newspaper articles, for themselves. More information and registration.

VENI grant for Antske Fokkens

Antske Fokkens received a VENI grant for her proposal Reading between the lines. The project aims at identifying so-called implicit perspectives in text.

Perspectives are conveyed in many ways. Explicit opinions or highly subjective terms are easily identified. However, perspectives are also expressed more subtly. For instance, Nick Wing argues that media describe white suspects more positively (e.g. ‘brilliant’, ‘athletic’) than black victims (e.g. ‘gang member’, ‘drug problems’). Ivar Vermeulen (p.c.) observes in a small Dutch corpus that Moroccan perpetrators are readily called thieves (implying generic behaviour), whereas perpetrators of Dutch origin merely ‘stole something’ (implying incidental behaviour). These observations are anecdotal, but they reveal how choices about what information to include, or how to describe someone’s role, may convey a specific perspective.

This project will investigate how linguistic analyses may be used to identify these more implicit ways of expressing perspectives in text. The research will be carried out in three stages. First, large-scale corpus analyses will be applied to identify distributions of semantic roles (what entities do) and other properties assigned to them (their characteristics). In the second stage, generic participants will be linked to the semantic role they imply (e.g. a thief will be linked to the perpetrator of stealing). With these links, we can investigate whether thieves are described differently from people who steal. In the third stage, emotion and sentiment lexica will be used to identify the sentiment associated with descriptions of people, enabling research into whether people are depicted positively or negatively.
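As an illustration of stages two and three, the sketch below links a generic participant noun to the semantic role it implies and scores its descriptors against a sentiment lexicon; the lexicon entries, links and function are invented toy examples, not project resources:

```python
# Toy illustration of stages 2 and 3: link a generic participant noun to the
# semantic role it implies, then score its descriptors with a sentiment lexicon.
# All entries below are invented examples, not project resources.
role_links = {"thief": ("steal", "perpetrator")}                     # stage 2: noun -> (event, role)
sentiment = {"brilliant": 1, "athletic": 1,
             "gang member": -1, "drug problems": -1}                 # stage 3: descriptor -> polarity

def describe(mention, descriptors):
    event, role = role_links.get(mention, (None, None))
    score = sum(sentiment.get(d, 0) for d in descriptors)
    return {"mention": mention, "implied_event": event,
            "implied_role": role, "sentiment_score": score}

print(describe("thief", ["brilliant"]))
```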

The research is carried out in the context of digital humanities and social sciences. Evaluation and experimental setup will be guided towards identifying differences in perspective between sources. In addition to correctness of linguistic analyses (intrinsic evaluation), the possibility of using the method for identifying changes in perspective over time (historic research) or differences in perspective between sources (communication science) will be investigated.

NWO Project granted: CLARIAH

CLARIAH (Common Lab Research Infrastructure for the Arts and Humanities) project granted by NWO in the National Roadmap for Large-Scale Research Facilities programme.

CLARIAH is developing a digital infrastructure that brings together large collections of data and software from different humanities disciplines. This will enable humanities researchers, from historians, literature experts and archaeologists to linguists, speech technologists and media scientists, to investigate cross-disciplinary questions, for example about culture and societal change. CLARIAH has received 12 million euros for the development of research instruments and the training of scientists. This project is vitally important for the development of the humanities in the Netherlands: a digital revolution is taking place that will drastically change how humanities research is done. The potential societal impact is also considerable.

CLARIAH (Common Lab Research Infrastructure for the Arts and Humanities)
CLARIAH: BIG DATA, GRAND CHALLENGES background article (pdf).

Organisations involved (applicants): Huygens ING, International Institute for Social History, Meertens Institute, Netherlands Institute for Sound and Vision, DANS, Radboud University Nijmegen, Utrecht University, University of Amsterdam and VU University Amsterdam. Project leader: Prof. A.F. (Lex) Heerma van Voss.

Prof. dr. Piek Vossen is part of the CLARIAH core team.

CLARIAH | NWO-programma Nationale Roadmap on YouTube (in Dutch)

CLARIAH kickoff 2015 on YouTube (in Dutch)

Release Open Source Dutch WordNet

Open Source Dutch Wordnet is a Dutch lexical semantic database.
Demo of Open Source Dutch WordNet. The first version of Open Dutch WordNet (ODWN) was released on 2 December 2014, by Marten Postma and Piek Vossen.

ODWN was created by removing the proprietary content from Cornetto (http://www2.let.vu.nl/oz/cltl/cornetto) and replacing it with open source resources.

Open Source Dutch WordNet contains 116,992 synsets, of which 95,356 originate from WordNet 3.0 and 21,636 are new. The number of English synsets without Dutch synonyms is 60,743, which means that 34,613 WordNet 3.0 synsets have been filled with at least one Dutch synonym.
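As a quick check, the figures above are mutually consistent; a two-line sketch of the arithmetic:

```python
# Consistency check of the synset counts reported above.
total_synsets = 95_356 + 21_636     # WordNet 3.0 synsets + new synsets -> 116,992
filled_synsets = 95_356 - 60_743    # WordNet 3.0 synsets with >= 1 Dutch synonym -> 34,613
print(total_synsets, filled_synsets)  # 116992 34613
```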

The demo of Open Source Dutch WordNet can be inspected by going through these steps:
(1) Use Google Chrome or Mozilla Firefox as your browser
(2) Go to https://debvisdic.let.vu.nl:9002/editor/
(3) Log in with username: gast, password: gast
(4) Click ‘ODWN’ in the left box and click the ‘Add’ button
(5) Click the ‘Open dictionaries’ button and inspect the resource

This project has been co-funded by the Nederlandse Taalunie (2013-2014).