Reducing grounded learning tasks to grammatical inference

Benjamin Börschinger*, Bevan K. Jones, Mark Johnson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

26 Citations (Scopus)

Abstract

It is often assumed that 'grounded' learning tasks are beyond the scope of grammatical inference techniques. In this paper, we show that the grounded task of learning a semantic parser from ambiguous training data, as discussed in Kim and Mooney (2010), can be reduced to a Probabilistic Context-Free Grammar learning task in a way that gives state-of-the-art results. We further show that additionally letting our model learn the language's canonical word order improves its performance, yielding higher semantic parsing f-scores than previously reported in the literature.
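The reduction the abstract describes turns each ambiguous training example, a sentence paired with several candidate meaning representations, into a grammar-learning problem, so that standard PCFG estimation resolves which meaning each sentence expresses. The sketch below is not the paper's construction: it replaces the PCFG with a much simpler unigram mixture trained by EM, over invented sportscasting-style data, purely to illustrate how learning from ambiguous (sentence, candidate-meanings) pairs can recover the sentence-to-meaning alignment.

# Toy sketch (not the paper's model): EM over ambiguous supervision.
# Each example pairs a sentence with a set of candidate meanings; the
# E-step distributes the example over its candidates in proportion to the
# probability the current parameters assign, and the M-step re-estimates
# P(meaning) and P(word | meaning) from those fractional counts.
from collections import defaultdict

def em_grounded(examples, iterations=20):
    """examples: list of (words, candidate_meanings) pairs."""
    meanings = {m for _, cands in examples for m in cands}
    vocab = {w for words, _ in examples for w in words}
    # Uniform initialisation of P(meaning) and P(word | meaning).
    p_m = {m: 1.0 / len(meanings) for m in meanings}
    p_w = {m: {w: 1.0 / len(vocab) for w in vocab} for m in meanings}
    for _ in range(iterations):
        c_m = defaultdict(float)
        c_w = defaultdict(lambda: defaultdict(float))
        for words, cands in examples:
            # E-step: posterior over this example's candidate meanings.
            scores = {}
            for m in cands:
                s = p_m[m]
                for w in words:
                    s *= p_w[m][w]
                scores[m] = s
            z = sum(scores.values()) or 1.0
            for m, s in scores.items():
                gamma = s / z
                c_m[m] += gamma
                for w in words:
                    c_w[m][w] += gamma
        # M-step: renormalise the fractional counts.
        total_m = sum(c_m.values())
        for m in meanings:
            if total_m:
                p_m[m] = c_m[m] / total_m
            tot = sum(c_w[m].values())
            if tot:
                for w in vocab:
                    p_w[m][w] = c_w[m][w] / tot
    return p_m, p_w

# Hypothetical ambiguous supervision: each commentary sentence is paired
# with the events that occurred around the time it was uttered, only one
# of which it actually describes.
examples = [
    (["pink10", "kicks", "the", "ball"], {"kick(pink10)"}),
    (["pink10", "kicks", "the", "ball", "again"],
     {"kick(pink10)", "turnover(purple3)"}),
    (["purple3", "steals", "the", "ball"],
     {"turnover(purple3)", "kick(pink10)"}),
]
p_m, p_w = em_grounded(examples)
# After training, the posterior for each ambiguous sentence concentrates on
# the event it actually describes: the kick for the second example and the
# turnover for the third.

In the paper itself, the same disambiguation is carried out by ordinary PCFG parameter estimation over a grammar that encodes the candidate meaning representations; the toy above only mirrors the underlying EM-style reasoning, not the grammar encoding.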

Original language: English
Title of host publication: EMNLP 2011 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Place of publication: Edinburgh, UK
Publisher: Association for Computational Linguistics (ACL)
Pages: 1416-1425
Number of pages: 10
ISBN (Print): 1937284115, 9781937284114
Publication status: Published - 2011
Event: Conference on Empirical Methods in Natural Language Processing, EMNLP 2011 - Edinburgh, United Kingdom
Duration: 27 Jul 2011 - 31 Jul 2011

Other

Other: Conference on Empirical Methods in Natural Language Processing, EMNLP 2011
Country/Territory: United Kingdom
City: Edinburgh
Period: 27/07/11 - 31/07/11
