[Exam] 104-2 (Spring 2016) Hsin-Hsi Chen, Natural Language Processing, Midterm

Author: madeformylov (睡覺治百病)   2016-07-04 19:05:03
Course Name: Natural Language Processing
Course Type: Departmental Elective
Instructor: Hsin-Hsi Chen
College: College of Electrical Engineering and Computer Science
Department: Department of Computer Science and Information Engineering
Exam Date (Y/M/D): 2016/04/21
Time Limit (minutes): 180
Questions:
01. Machine translation (MT) is one of the most practical NLP applications.
The development of MT systems has a long history, but there is still room
for improvement. Please describe two linguistic phenomena that explain why
building MT systems is challenging. (10pts)
02. An NLP system can be implemented as a pipeline comprising modules for
morphological processing, syntactic analysis, semantic interpretation, and
context analysis. Please use the following news story to describe the
concepts behind each module, mentioning one task per module. A sketch of an
off-the-shelf pipeline follows the story. (10pts)
這場地震可能影響日相安倍晉三的施政計畫。安倍十八日說,消費稅調漲的
計畫不會改變。
(This earthquake may affect Japanese Prime Minister Shinzo Abe's policy
agenda. Abe said on the 18th that the plan to raise the consumption tax
would not change.)
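A minimal sketch of such a pipeline using spaCy (an assumed off-the-shelf
library, run here on the translated sentence; the en_core_web_sm model must
be installed). It covers morphological processing (lemmas) and syntactic
analysis (POS tags, dependencies), plus named entities as one step toward
semantic interpretation; context analysis, e.g. linking "Abe" across the
two sentences, would need further components.

    import spacy

    # assumes: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("This earthquake may affect Japanese Prime Minister "
              "Shinzo Abe's policy agenda.")

    for tok in doc:
        # morphology: lemma; syntax: part-of-speech tag, dependency label
        print(tok.text, tok.lemma_, tok.pos_, tok.dep_)

    # toward semantic interpretation: named entities in the sentence
    print([(ent.text, ent.label_) for ent in doc.ents])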
03. Ambiguity is inherent in natural language. Please describe why ambiguity
may arise in each of the following cases. (10pts)
(a) Prepositional phrase attachment.
(b) Noun-noun compound.
(c) Word: bass
04. Why is the extraction of multiword expressions critical for NLP
applications? Please propose a method to check whether an extracted multiword
expression meets the non-compositionality criterion. (10pts)
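One possible check, sketched below with a hypothetical toy embedding table:
learn a vector for the whole expression (treated as a single token, here
"kick_the_bucket") and compare it with the composition of its parts'
vectors; low cosine similarity suggests the expression is non-compositional.

    import numpy as np

    # hypothetical toy vectors; in practice they come from embeddings
    # trained on a corpus where the candidate is merged into one token
    emb = {
        "kick":            np.array([0.9, 0.1, 0.0]),
        "the":             np.array([0.1, 0.1, 0.1]),
        "bucket":          np.array([0.8, 0.2, 0.1]),
        "kick_the_bucket": np.array([0.0, 0.1, 0.9]),  # idiom as one token
    }

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def compositionality(mwe, parts):
        # compose the parts by averaging, then compare with the learned
        # vector of the whole expression
        composed = np.mean([emb[w] for w in parts], axis=0)
        return cosine(emb[mwe], composed)

    score = compositionality("kick_the_bucket", ["kick", "the", "bucket"])
    print(f"similarity = {score:.2f}")  # low value -> non-compositional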
05. Mutual information and likelihood ratio are commonly used to find
collocations in a corpus. Please describe the ideas behind these two methods.
(10pts)
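To make the first idea concrete, the sketch below computes pointwise mutual
information over a hypothetical toy corpus: PMI measures how much more often
a word pair co-occurs than expected if the words were independent. The
likelihood ratio instead compares how much better a dependence hypothesis
explains the observed counts than an independence hypothesis.

    import math
    from collections import Counter

    # hypothetical toy corpus; a real study would use a large corpus
    tokens = ("new york is a big city . "
              "the big apple is new york . "
              "a big dog runs in new york .").split()

    N = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))

    def pmi(w1, w2):
        # PMI = log2( P(w1, w2) / (P(w1) P(w2)) )
        p_xy = bigrams[(w1, w2)] / (N - 1)
        return math.log2(p_xy / ((unigrams[w1] / N) * (unigrams[w2] / N)))

    print(f"PMI(new, york) = {pmi('new', 'york'):.2f}")  # collocation: higher
    print(f"PMI(is, a)     = {pmi('is', 'a'):.2f}")      # ordinary pair: lower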
06. Emoticons are commonly used in social media. They can be regarded as a
special vocabulary of a language, and understanding them helps interpret
utterances in an interaction. Please propose an "emoticon" embedding approach
that represents each emoticon as a vector, and find the 5 most relevant words
for each emoticon. (10pts)
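A minimal sketch of one such approach, assuming the gensim library and a
hypothetical toy corpus: keep emoticons as ordinary tokens, train a
skip-gram model, and query the nearest neighbors. On a toy corpus the
neighbors are noisy; on real social-media data they become meaningful.

    from gensim.models import Word2Vec

    # hypothetical tokenized utterances with emoticons kept as tokens
    sentences = [
        ["great", "game", "tonight", ":)"],
        ["so", "happy", "for", "you", ":)"],
        ["missed", "the", "bus", "again", ":("],
        ["feeling", "sad", "and", "tired", ":("],
        ["what", "a", "happy", "day", ":)"],
    ]

    # skip-gram (sg=1) so each emoticon gets a vector like any other word
    model = Word2Vec(sentences, vector_size=50, window=3,
                     min_count=1, sg=1, epochs=200, seed=1)

    for emo in [":)", ":("]:
        # 5 most relevant words by cosine similarity in the embedding space
        print(emo, model.wv.most_similar(emo, topn=5))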
07. To deal with unseen n-grams, smoothing techniques are adopted in
conventional language modeling approaches. They reallocate probability mass
from observed n-grams to unobserved n-grams, producing better estimates for
unseen data. Please show a smoothing technique for a conventional language
model, and discuss why a neural network language model (NNLM) can achieve
better generalization for unseen n-grams. (10pts)
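A minimal sketch of one such technique, add-one (Laplace) smoothing for a
bigram model on a hypothetical toy corpus: every bigram count is incremented
by one, so unseen bigrams receive a small non-zero probability. An NNLM, by
contrast, generalizes because similar words receive similar continuous
representations, so an unseen n-gram can be scored by analogy to seen ones.

    from collections import Counter

    # hypothetical toy training corpus
    tokens = "the cat sat on the mat </s> the dog sat on the log </s>".split()
    V = len(set(tokens))                       # vocabulary size
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))

    def p_laplace(w_prev, w):
        # add one to every bigram count; add V to the normalizer so the
        # distribution over w still sums to one
        return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

    print(p_laplace("the", "cat"))   # seen bigram
    print(p_laplace("cat", "log"))   # unseen bigram, still > 0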
08. In HMM learning, we aim to infer the best model parameters, given a
skeletal model and an observation sequence. The following two equations are
used to compute the state transition probabilities.
ξ_t(i, j) = α_t(i) a_{ij} b_j(o_{t+1}) β_{t+1}(j) / P(O | λ)

\hat{a}_{ij} = Σ_{t=1}^{T-1} ξ_t(i, j) / Σ_{t=1}^{T-1} Σ_{k=1}^{N} ξ_t(i, k)
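A minimal sketch of these re-estimation equations with hypothetical toy
parameters: compute the forward and backward probabilities, form ξ_t(i, j),
and normalize over successor states to obtain \hat{a}_{ij}.

    import numpy as np

    # hypothetical toy HMM: N = 2 states, 2 output symbols
    A  = np.array([[0.7, 0.3],    # transition probabilities a_ij
                   [0.4, 0.6]])
    B  = np.array([[0.9, 0.1],    # emission probabilities b_i(o)
                   [0.2, 0.8]])
    pi = np.array([0.6, 0.4])     # initial state distribution
    O  = [0, 1, 1, 0]             # observation sequence
    T, N = len(O), len(pi)

    # forward (alpha) and backward (beta) passes
    alpha = np.zeros((T, N))
    beta  = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B[:, O[t]]
    beta[T-1] = 1.0
    for t in range(T-2, -1, -1):
        beta[t] = A @ (B[:, O[t+1]] * beta[t+1])

    P_O = alpha[T-1].sum()        # P(O | lambda)

    # xi_t(i, j) = alpha_t(i) a_ij b_j(o_{t+1}) beta_{t+1}(j) / P(O | lambda)
    xi = np.zeros((T-1, N, N))
    for t in range(T-1):
        xi[t] = (alpha[t][:, None] * A * B[:, O[t+1]] * beta[t+1]) / P_O

    # a_hat_ij = sum_t xi_t(i, j) / sum_t sum_k xi_t(i, k)
    num   = xi.sum(axis=0)
    a_hat = num / num.sum(axis=1, keepdims=True)
    print(a_hat)                  # each row sums to 1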
