# -*- coding: utf-8 -*-
# Natural Language Toolkit: IBM Model 5
#
# Copyright (C) 2001-2019 NLTK Project
# Author: Tah Wei Hoon <hoon.tw@gmail.com>
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT

"""
Translation model that keeps track of vacant positions in the target
sentence to decide where to place translated words.

Translation can be viewed as a process where each word in the source
sentence is stepped through sequentially, generating translated words
for each source word. The target sentence can be viewed as being made
up of ``m`` empty slots initially, which gradually fill up as generated
words are placed in them.

Models 3 and 4 use distortion probabilities to decide how to place
translated words. For simplicity, these models ignore the history of
which slots have already been occupied with translated words.
Consider the placement of the last translated word: there is only one
empty slot left in the target sentence, so the distortion probability
should be 1.0 for that position and 0.0 everywhere else. However, the
distortion probabilities for Models 3 and 4 are set up such that all
positions are under consideration.

IBM Model 5 fixes this deficiency by accounting for occupied slots
during translation. It introduces the vacancy function v(j), the number
of vacancies up to, and including, position j in the target sentence.

Terminology:

Maximum vacancy:
    The number of valid slots that a word can be placed in.
    This is not necessarily the same as the number of vacant slots.
    For example, if a tablet contains more than one word, the head word
    cannot be placed at the last vacant slot because there will be no
    space for the other words in the tablet. The number of valid slots
    has to take into account the length of the tablet.
    Non-head words cannot be placed before the head word, so vacancies
    to the left of the head word are ignored.

Vacancy difference:
    For a head word: (v(j) - v(center of previous cept))
    Can be positive or negative.
    For a non-head word: (v(j) - v(position of previously placed word))
    Always positive, because successive words in a tablet are assumed to
    appear to the right of the previous word.
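
For example, suppose the target sentence has 5 slots and slots 2 and 4
are already occupied. Then v(3) = 2 (slots 1 and 3 are vacant) and
v(5) = 3. If a head word is placed at position 5 and the center of the
previous cept is position 3, the vacancy difference is
v(5) - v(3) = 1.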

Positioning of target words falls under three cases:

(1) Words generated by NULL are distributed uniformly
(2) For a head word t, its position is modeled by the probability
    v_head(dv | max_v,word_class_t(t))
(3) For a non-head word t, its position is modeled by the probability
    v_non_head(dv | max_v,word_class_t(t))

dv and max_v are defined differently for head and non-head words.

The EM algorithm used in Model 5 is:

E step - In the training data, collect counts, weighted by prior
         probabilities.
         (a) count how many times a source language word is translated
             into a target language word
         (b) for a particular word class and maximum vacancy, count how
             many times a head word and the previous cept's center have
             a particular difference in number of vacancies
         (c) for a particular word class and maximum vacancy, count how
             many times a non-head word and the previous target word
             have a particular difference in number of vacancies
         (d) count how many times a source word is aligned to phi number
             of target words
         (e) count how many times NULL is aligned to a target word

M step - Estimate new probabilities based on the counts from the E step

Like Model 4, there are too many possible alignments to consider. Thus,
a hill climbing approach is used to sample good candidates. In addition,
pruning is used to weed out unlikely alignments based on Model 4 scores.

Notations:

i: Position in the source sentence
    Valid values are 0 (for NULL), 1, 2, ..., length of source sentence
j: Position in the target sentence
    Valid values are 1, 2, ..., length of target sentence
l: Number of words in the source sentence, excluding NULL
m: Number of words in the target sentence
s: A word in the source language
t: A word in the target language
phi: Fertility, the number of target words produced by a source word
p1: Probability that a target word produced by a source word is
    accompanied by another target word that is aligned to NULL
p0: 1 - p1
max_v: Maximum vacancy
dv: Vacancy difference, Δv

The definition of v_head here differs from GIZA++, section 4.7 of
[Brown et al., 1993], and [Koehn, 2010]. In the latter cases, v_head is
v_head(v(j) | v(center of previous cept),max_v,word_class(t)).
Here, we follow appendix B of [Brown et al., 1993] and combine v(j) with
v(center of previous cept) to obtain dv:
v_head(v(j) - v(center of previous cept) | max_v,word_class(t)).

References:
Philipp Koehn. 2010. Statistical Machine Translation.
Cambridge University Press, New York.

Peter E. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and
Robert L. Mercer. 1993. The Mathematics of Statistical Machine
Translation: Parameter Estimation. Computational Linguistics, 19 (2),
263-311.
"""

from __future__ import division

import warnings
from collections import defaultdict
from math import factorial

from nltk.translate import AlignedSent
from nltk.translate import Alignment
from nltk.translate import IBMModel
from nltk.translate import IBMModel4
from nltk.translate.ibm_model import Counts
from nltk.translate.ibm_model import longest_target_sentence_length


class IBMModel5(IBMModel):
  108. """
  109. Translation model that keeps track of vacant positions in the target
  110. sentence to decide where to place translated words
  111. >>> bitext = []
  112. >>> bitext.append(AlignedSent(['klein', 'ist', 'das', 'haus'], ['the', 'house', 'is', 'small']))
  113. >>> bitext.append(AlignedSent(['das', 'haus', 'war', 'ja', 'groß'], ['the', 'house', 'was', 'big']))
  114. >>> bitext.append(AlignedSent(['das', 'buch', 'ist', 'ja', 'klein'], ['the', 'book', 'is', 'small']))
  115. >>> bitext.append(AlignedSent(['ein', 'haus', 'ist', 'klein'], ['a', 'house', 'is', 'small']))
  116. >>> bitext.append(AlignedSent(['das', 'haus'], ['the', 'house']))
  117. >>> bitext.append(AlignedSent(['das', 'buch'], ['the', 'book']))
  118. >>> bitext.append(AlignedSent(['ein', 'buch'], ['a', 'book']))
  119. >>> bitext.append(AlignedSent(['ich', 'fasse', 'das', 'buch', 'zusammen'], ['i', 'summarize', 'the', 'book']))
  120. >>> bitext.append(AlignedSent(['fasse', 'zusammen'], ['summarize']))
  121. >>> src_classes = {'the': 0, 'a': 0, 'small': 1, 'big': 1, 'house': 2, 'book': 2, 'is': 3, 'was': 3, 'i': 4, 'summarize': 5 }
  122. >>> trg_classes = {'das': 0, 'ein': 0, 'haus': 1, 'buch': 1, 'klein': 2, 'groß': 2, 'ist': 3, 'war': 3, 'ja': 4, 'ich': 5, 'fasse': 6, 'zusammen': 6 }
  123. >>> ibm5 = IBMModel5(bitext, 5, src_classes, trg_classes)
  124. >>> print(round(ibm5.head_vacancy_table[1][1][1], 3))
  125. 1.0
  126. >>> print(round(ibm5.head_vacancy_table[2][1][1], 3))
  127. 0.0
  128. >>> print(round(ibm5.non_head_vacancy_table[3][3][6], 3))
  129. 1.0
  130. >>> print(round(ibm5.fertility_table[2]['summarize'], 3))
  131. 1.0
  132. >>> print(round(ibm5.fertility_table[1]['book'], 3))
  133. 1.0
  134. >>> print(ibm5.p1)
  135. 0.033...
  136. >>> test_sentence = bitext[2]
  137. >>> test_sentence.words
  138. ['das', 'buch', 'ist', 'ja', 'klein']
  139. >>> test_sentence.mots
  140. ['the', 'book', 'is', 'small']
  141. >>> test_sentence.alignment
  142. Alignment([(0, 0), (1, 1), (2, 2), (3, None), (4, 3)])
  143. """
    MIN_SCORE_FACTOR = 0.2
    """
    Alignments with scores below this factor are pruned during sampling
    """

    def __init__(
        self,
        sentence_aligned_corpus,
        iterations,
        source_word_classes,
        target_word_classes,
        probability_tables=None,
    ):
        """
        Train on ``sentence_aligned_corpus`` and create a lexical
        translation model, vacancy models, a fertility model, and a
        model for generating NULL-aligned words.

        Translation direction is from ``AlignedSent.mots`` to
        ``AlignedSent.words``.

        :param sentence_aligned_corpus: Sentence-aligned parallel corpus
        :type sentence_aligned_corpus: list(AlignedSent)

        :param iterations: Number of iterations to run training algorithm
        :type iterations: int

        :param source_word_classes: Lookup table that maps a source word
            to its word class, the latter represented by an integer id
        :type source_word_classes: dict[str]: int

        :param target_word_classes: Lookup table that maps a target word
            to its word class, the latter represented by an integer id
        :type target_word_classes: dict[str]: int

        :param probability_tables: Optional. Use this to pass in custom
            probability values. If not specified, probabilities will be
            set to a uniform distribution, or some other sensible value.
            If specified, all the following entries must be present:
            ``translation_table``, ``alignment_table``,
            ``fertility_table``, ``p1``, ``head_distortion_table``,
            ``non_head_distortion_table``, ``head_vacancy_table``,
            ``non_head_vacancy_table``. See ``IBMModel``, ``IBMModel4``,
            and ``IBMModel5`` for the type and purpose of these tables.
        :type probability_tables: dict[str]: object
        """
        super(IBMModel5, self).__init__(sentence_aligned_corpus)
        self.reset_probabilities()
        self.src_classes = source_word_classes
        self.trg_classes = target_word_classes

        if probability_tables is None:
            # Get probabilities from IBM model 4
            ibm4 = IBMModel4(
                sentence_aligned_corpus,
                iterations,
                source_word_classes,
                target_word_classes,
            )
            self.translation_table = ibm4.translation_table
            self.alignment_table = ibm4.alignment_table
            self.fertility_table = ibm4.fertility_table
            self.p1 = ibm4.p1
            self.head_distortion_table = ibm4.head_distortion_table
            self.non_head_distortion_table = ibm4.non_head_distortion_table
            self.set_uniform_probabilities(sentence_aligned_corpus)
        else:
            # Set user-defined probabilities
            self.translation_table = probability_tables['translation_table']
            self.alignment_table = probability_tables['alignment_table']
            self.fertility_table = probability_tables['fertility_table']
            self.p1 = probability_tables['p1']
            self.head_distortion_table = probability_tables['head_distortion_table']
            self.non_head_distortion_table = probability_tables[
                'non_head_distortion_table'
            ]
            self.head_vacancy_table = probability_tables['head_vacancy_table']
            self.non_head_vacancy_table = probability_tables['non_head_vacancy_table']

        for n in range(0, iterations):
            self.train(sentence_aligned_corpus)

    def reset_probabilities(self):
        super(IBMModel5, self).reset_probabilities()
        self.head_vacancy_table = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: self.MIN_PROB))
        )
        """
        dict[int][int][int]: float. Probability(vacancy difference |
        number of remaining valid positions,target word class).
        Values accessed as ``head_vacancy_table[dv][max_v][trg_class]``.
        """

        self.non_head_vacancy_table = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: self.MIN_PROB))
        )
        """
        dict[int][int][int]: float. Probability(vacancy difference |
        number of remaining valid positions,target word class).
        Values accessed as ``non_head_vacancy_table[dv][max_v][trg_class]``.
        """

    def set_uniform_probabilities(self, sentence_aligned_corpus):
        """
        Set vacancy probabilities uniformly to
        1 / cardinality of vacancy difference values
        """
        max_m = longest_target_sentence_length(sentence_aligned_corpus)

        # The maximum vacancy difference occurs when a word is placed in
        # the last available position m of the target sentence and the
        # previous word position has no vacancies.
        # The minimum is 1-max_v, when a word is placed in the first
        # available position and the previous word is placed beyond the
        # last available position.
        # Thus, the number of possible vacancy difference values is
        # (max_v) - (1-max_v) + 1 = 2 * max_v.
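        # For example, if max_v is 3, dv can take the values
        # -2, -1, 0, 1, 2, 3: six values in total, i.e. 2 * 3.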
        if max_m > 0 and (1 / (2 * max_m)) < IBMModel.MIN_PROB:
            warnings.warn(
                "A target sentence is too long ("
                + str(max_m)
                + " words). Results may be less accurate."
            )

        for max_v in range(1, max_m + 1):
            for dv in range(1, max_m + 1):
                initial_prob = 1 / (2 * max_v)
                # Bind initial_prob as a default argument so that each
                # table keeps the value for its own max_v, rather than
                # the loop variable's final value (late-binding closure)
                self.head_vacancy_table[dv][max_v] = defaultdict(
                    lambda p=initial_prob: p
                )
                self.head_vacancy_table[-(dv - 1)][max_v] = defaultdict(
                    lambda p=initial_prob: p
                )
                self.non_head_vacancy_table[dv][max_v] = defaultdict(
                    lambda p=initial_prob: p
                )
                self.non_head_vacancy_table[-(dv - 1)][max_v] = defaultdict(
                    lambda p=initial_prob: p
                )
    def train(self, parallel_corpus):
        counts = Model5Counts()
        for aligned_sentence in parallel_corpus:
            l = len(aligned_sentence.mots)
            m = len(aligned_sentence.words)

            # Sample the alignment space
            sampled_alignments, best_alignment = self.sample(aligned_sentence)
            # Record the most probable alignment
            aligned_sentence.alignment = Alignment(
                best_alignment.zero_indexed_alignment()
            )

            # E step (a): Compute normalization factors to weigh counts
            total_count = self.prob_of_alignments(sampled_alignments)

            # E step (b): Collect counts
            for alignment_info in sampled_alignments:
                count = self.prob_t_a_given_s(alignment_info)
                normalized_count = count / total_count

                for j in range(1, m + 1):
                    counts.update_lexical_translation(
                        normalized_count, alignment_info, j
                    )

                slots = Slots(m)
                for i in range(1, l + 1):
                    counts.update_vacancy(
                        normalized_count, alignment_info, i, self.trg_classes, slots
                    )
                counts.update_null_generation(normalized_count, alignment_info)
                counts.update_fertility(normalized_count, alignment_info)

        # M step: Update probabilities with maximum likelihood estimates
        # If any probability is less than MIN_PROB, clamp it to MIN_PROB
        existing_alignment_table = self.alignment_table
        self.reset_probabilities()
        self.alignment_table = existing_alignment_table  # don't retrain

        self.maximize_lexical_translation_probabilities(counts)
        self.maximize_vacancy_probabilities(counts)
        self.maximize_fertility_probabilities(counts)
        self.maximize_null_generation_probabilities(counts)
    def sample(self, sentence_pair):
        """
        Sample the most probable alignments from the entire alignment
        space according to Model 4

        Note that Model 4 scoring is used instead of Model 5 because the
        latter is too expensive to compute.

        First, determine the best alignment according to IBM Model 2.
        With this initial alignment, use hill climbing to determine the
        best alignment according to IBM Model 4. Add this alignment and
        its neighbors to the sample set. Repeat this process with other
        initial alignments obtained by pegging an alignment point.
        Finally, prune alignments that have substantially lower Model 4
        scores than the best alignment.

        :param sentence_pair: Source and target language sentence pair
            to generate a sample of alignments from
        :type sentence_pair: AlignedSent

        :return: A set of best alignments represented by their ``AlignmentInfo``
            and the best alignment of the set for convenience
        :rtype: set(AlignmentInfo), AlignmentInfo
        """
        sampled_alignments, best_alignment = super(IBMModel5, self).sample(
            sentence_pair
        )
        return self.prune(sampled_alignments), best_alignment
    def prune(self, alignment_infos):
        """
        Removes alignments from ``alignment_infos`` that have
        substantially lower Model 4 scores than the best alignment

        :return: Pruned alignments
        :rtype: set(AlignmentInfo)
        """
        alignments = []
        best_score = 0

        for alignment_info in alignment_infos:
            score = IBMModel4.model4_prob_t_a_given_s(alignment_info, self)
            best_score = max(score, best_score)
            alignments.append((alignment_info, score))
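
        # With MIN_SCORE_FACTOR = 0.2, for example, any alignment whose
        # Model 4 score is below one-fifth of the best score is dropped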
        threshold = IBMModel5.MIN_SCORE_FACTOR * best_score
        alignments = [a[0] for a in alignments if a[1] > threshold]
        return set(alignments)
    def hillclimb(self, alignment_info, j_pegged=None):
        """
        Starting from the alignment in ``alignment_info``, look at
        neighboring alignments iteratively for the best one, according
        to Model 4

        Note that Model 4 scoring is used instead of Model 5 because the
        latter is too expensive to compute.

        There is no guarantee that the best alignment in the alignment
        space will be found, because the algorithm might be stuck in a
        local maximum.

        :param j_pegged: If specified, the search will be constrained to
            alignments where ``j_pegged`` remains unchanged
        :type j_pegged: int

        :return: The best alignment found from hill climbing
        :rtype: AlignmentInfo
        """
        alignment = alignment_info  # alias with shorter name
        max_probability = IBMModel4.model4_prob_t_a_given_s(alignment, self)

        while True:
            old_alignment = alignment
            for neighbor_alignment in self.neighboring(alignment, j_pegged):
                neighbor_probability = IBMModel4.model4_prob_t_a_given_s(
                    neighbor_alignment, self
                )
                if neighbor_probability > max_probability:
                    alignment = neighbor_alignment
                    max_probability = neighbor_probability

            if alignment == old_alignment:
                # Until there are no better alignments
                break

        alignment.score = max_probability
        return alignment
    def prob_t_a_given_s(self, alignment_info):
        """
        Probability of target sentence and an alignment given the
        source sentence
        """
        probability = 1.0
        MIN_PROB = IBMModel.MIN_PROB
        slots = Slots(len(alignment_info.trg_sentence) - 1)

        def null_generation_term():
            # Binomial distribution: B(m - null_fertility, p1)
            value = 1.0
            p1 = self.p1
            p0 = 1 - p1
            null_fertility = alignment_info.fertility_of_i(0)
            m = len(alignment_info.trg_sentence) - 1
            value *= pow(p1, null_fertility) * pow(p0, m - 2 * null_fertility)
            if value < MIN_PROB:
                return MIN_PROB

            # Combination: (m - null_fertility) choose null_fertility
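            # computed as a running product so intermediate values stay
            # small, instead of evaluating three large factorials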
            for i in range(1, null_fertility + 1):
                value *= (m - null_fertility - i + 1) / i
            return value

        def fertility_term():
            value = 1.0
            src_sentence = alignment_info.src_sentence
            for i in range(1, len(src_sentence)):
                fertility = alignment_info.fertility_of_i(i)
                value *= (
                    factorial(fertility)
                    * self.fertility_table[fertility][src_sentence[i]]
                )
                if value < MIN_PROB:
                    return MIN_PROB
            return value

        def lexical_translation_term(j):
            t = alignment_info.trg_sentence[j]
            i = alignment_info.alignment[j]
            s = alignment_info.src_sentence[i]
            return self.translation_table[t][s]

        def vacancy_term(i):
            value = 1.0
            tablet = alignment_info.cepts[i]
            tablet_length = len(tablet)
            total_vacancies = slots.vacancies_at(len(slots))

            # case 1: NULL-aligned words
            if tablet_length == 0:
                return value

            # case 2: head word
            j = tablet[0]
            previous_cept = alignment_info.previous_cept(j)
            previous_center = alignment_info.center_of_cept(previous_cept)
            dv = slots.vacancies_at(j) - slots.vacancies_at(previous_center)
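            # the head word cannot take the last (tablet_length - 1)
            # vacancies, which are needed for the rest of the tablet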
            max_v = total_vacancies - tablet_length + 1
            trg_class = self.trg_classes[alignment_info.trg_sentence[j]]
            value *= self.head_vacancy_table[dv][max_v][trg_class]
            slots.occupy(j)  # mark position as occupied
            total_vacancies -= 1
            if value < MIN_PROB:
                return MIN_PROB

            # case 3: non-head words
            for k in range(1, tablet_length):
                previous_position = tablet[k - 1]
                previous_vacancies = slots.vacancies_at(previous_position)
                j = tablet[k]
                dv = slots.vacancies_at(j) - previous_vacancies
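                # valid positions are the vacancies to the right of the
                # previously placed word, minus those reserved for the
                # remaining (tablet_length - k - 1) words of the tablet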
                max_v = total_vacancies - tablet_length + k + 1 - previous_vacancies
                trg_class = self.trg_classes[alignment_info.trg_sentence[j]]
                value *= self.non_head_vacancy_table[dv][max_v][trg_class]
                slots.occupy(j)  # mark position as occupied
                total_vacancies -= 1
                if value < MIN_PROB:
                    return MIN_PROB

            return value

        # end nested functions

        # Abort computation whenever probability falls below MIN_PROB at
        # any point, since MIN_PROB can be considered as zero
        probability *= null_generation_term()
        if probability < MIN_PROB:
            return MIN_PROB

        probability *= fertility_term()
        if probability < MIN_PROB:
            return MIN_PROB

        for j in range(1, len(alignment_info.trg_sentence)):
            probability *= lexical_translation_term(j)
            if probability < MIN_PROB:
                return MIN_PROB

        for i in range(1, len(alignment_info.src_sentence)):
            probability *= vacancy_term(i)
            if probability < MIN_PROB:
                return MIN_PROB

        return probability

    def maximize_vacancy_probabilities(self, counts):
        MIN_PROB = IBMModel.MIN_PROB
        head_vacancy_table = self.head_vacancy_table
        for dv, max_vs in counts.head_vacancy.items():
            for max_v, trg_classes in max_vs.items():
                for t_cls in trg_classes:
                    estimate = (
                        counts.head_vacancy[dv][max_v][t_cls]
                        / counts.head_vacancy_for_any_dv[max_v][t_cls]
                    )
                    head_vacancy_table[dv][max_v][t_cls] = max(estimate, MIN_PROB)

        non_head_vacancy_table = self.non_head_vacancy_table
        for dv, max_vs in counts.non_head_vacancy.items():
            for max_v, trg_classes in max_vs.items():
                for t_cls in trg_classes:
                    estimate = (
                        counts.non_head_vacancy[dv][max_v][t_cls]
                        / counts.non_head_vacancy_for_any_dv[max_v][t_cls]
                    )
                    non_head_vacancy_table[dv][max_v][t_cls] = max(estimate, MIN_PROB)


class Model5Counts(Counts):
    """
    Data object to store counts of various parameters during training.
    Includes counts for vacancies.
    """

    def __init__(self):
        super(Model5Counts, self).__init__()
        self.head_vacancy = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: 0.0))
        )
        self.head_vacancy_for_any_dv = defaultdict(lambda: defaultdict(lambda: 0.0))
        self.non_head_vacancy = defaultdict(
            lambda: defaultdict(lambda: defaultdict(lambda: 0.0))
        )
        self.non_head_vacancy_for_any_dv = defaultdict(lambda: defaultdict(lambda: 0.0))
    def update_vacancy(self, count, alignment_info, i, trg_classes, slots):
        """
        :param count: Value to add to the vacancy counts
        :param alignment_info: Alignment under consideration
        :param i: Source word position under consideration
        :param trg_classes: Target word classes
        :param slots: Vacancy states of the slots in the target sentence.
            Output parameter that will be modified as new words are placed
            in the target sentence.
        """
        tablet = alignment_info.cepts[i]
        tablet_length = len(tablet)
        total_vacancies = slots.vacancies_at(len(slots))

        # case 1: NULL-aligned words
        if tablet_length == 0:
            return  # ignore zero fertility words

        # case 2: head word
        j = tablet[0]
        previous_cept = alignment_info.previous_cept(j)
        previous_center = alignment_info.center_of_cept(previous_cept)
        dv = slots.vacancies_at(j) - slots.vacancies_at(previous_center)
        max_v = total_vacancies - tablet_length + 1
        trg_class = trg_classes[alignment_info.trg_sentence[j]]
        self.head_vacancy[dv][max_v][trg_class] += count
        self.head_vacancy_for_any_dv[max_v][trg_class] += count
        slots.occupy(j)  # mark position as occupied
        total_vacancies -= 1

        # case 3: non-head words
        for k in range(1, tablet_length):
            previous_position = tablet[k - 1]
            previous_vacancies = slots.vacancies_at(previous_position)
            j = tablet[k]
            dv = slots.vacancies_at(j) - previous_vacancies
            max_v = total_vacancies - tablet_length + k + 1 - previous_vacancies
            trg_class = trg_classes[alignment_info.trg_sentence[j]]
            self.non_head_vacancy[dv][max_v][trg_class] += count
            self.non_head_vacancy_for_any_dv[max_v][trg_class] += count
            slots.occupy(j)  # mark position as occupied
            total_vacancies -= 1


class Slots(object):
    """
    Represents positions in a target sentence. Used to keep track of
    which slot (position) is occupied.
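
    For example, with a 3-word target sentence where position 1 has
    been filled:

    >>> slots = Slots(3)
    >>> slots.occupy(1)
    >>> slots.vacancies_at(3)
    2
    >>> len(slots)
    3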
  545. """
  546. def __init__(self, target_sentence_length):
  547. self._slots = [False] * (target_sentence_length + 1) # 1-indexed
  548. def occupy(self, position):
  549. """
  550. :return: Mark slot at ``position`` as occupied
  551. """
  552. self._slots[position] = True
  553. def vacancies_at(self, position):
  554. """
  555. :return: Number of vacant slots up to, and including, ``position``
  556. """
  557. vacancies = 0
  558. for k in range(1, position + 1):
  559. if not self._slots[k]:
  560. vacancies += 1
  561. return vacancies
  562. def __len__(self):
  563. return len(self._slots) - 1 # exclude dummy zeroeth element