Translation Evaluation. Following ▇▇▇▇▇ and Zong (2013) and ▇▇▇▇ et al. (2015), we evaluate the domain adaptation performance of our approach on Chinese-English NIST datasets. The development set is NIST 2006 and the test set is NIST 2005. The evaluation metric is case-insensitive BLEU4 (▇▇▇▇▇▇▇▇ et al., 2002). We use the SRILM toolkit (▇▇▇▇▇▇▇, 2002) to train a 4-gram English language model on a monolingual corpus with 399M English words. Table 4 shows the results. At iteration 0, only the out-of-domain corpus is used and the BLEU score is 5.61. All methods iteratively extract parallel phrases from non-parallel corpora and enlarge the extracted parallel corpus. We find that agreement-based learning achieves much higher BLEU scores while obtaining a smaller parallel corpus as compared with independent learning. One possible reason is that agreement-based learning rules out most unlikely phrase pairs by encouraging consensus between the two models.
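For concreteness, case-insensitive BLEU4 can be reproduced with any standard corpus-level BLEU implementation by lowercasing both hypotheses and references before scoring. The sketch below uses NLTK's corpus_bleu as an illustrative stand-in for the paper's scorer; the helper name case_insensitive_bleu4 and the toy segments are assumptions for illustration only and do not come from the original evaluation pipeline.

```python
# Minimal sketch of case-insensitive BLEU-4 scoring, using NLTK's corpus_bleu
# as an illustrative stand-in (assumption: the paper's exact BLEU scorer is
# not specified in this excerpt).
from nltk.translate.bleu_score import corpus_bleu


def case_insensitive_bleu4(hypotheses, references):
    """hypotheses: list of hypothesis strings.
    references: one list of reference strings per segment
    (the NIST sets provide multiple references per segment)."""
    # Lowercase before tokenization so the score ignores case.
    hyp_tokens = [h.lower().split() for h in hypotheses]
    ref_tokens = [[r.lower().split() for r in refs] for refs in references]
    # Uniform weights over 1- to 4-grams give standard BLEU-4.
    return corpus_bleu(ref_tokens, hyp_tokens, weights=(0.25, 0.25, 0.25, 0.25))


# Toy usage with made-up segments (hypothetical data, for illustration only):
hyps = ["the cat sat on the mat"]
refs = [["the cat is on the mat", "a cat sat on the mat"]]
print(round(case_insensitive_bleu4(hyps, refs), 4))
```

The same lowercase-then-score convention applies regardless of which BLEU implementation is used, which is what makes the metric "case-insensitive".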