The document summarizes two papers on language modeling techniques. [Zhang+ ACL2014] proposes applying Kneser-Ney smoothing to expected counts when the training data carries fractional weights, outperforming other smoothing methods on a domain adaptation task. [Pickhardt+ ACL2014] presents a generalized language model combining skipped n-grams with modified Kneser-Ney smoothing, reducing perplexity by up to 25.7% on small training data.
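To make the skipped-n-gram idea concrete, here is a minimal Python sketch (not the paper's implementation; the function name and the `_` wildcard convention are illustrative) that enumerates, for each contiguous n-gram, the skipped variants a generalized language model would also count, i.e. the same word sequence with one or more interior positions replaced by a wildcard.

```python
from itertools import combinations

def skip_ngrams(tokens, n, wildcard="_"):
    """Yield each contiguous n-gram together with its skipped variants,
    where one or more interior words are replaced by a wildcard
    (the first and last positions are always kept)."""
    interior = list(range(1, n - 1))  # positions eligible for skipping
    for start in range(len(tokens) - n + 1):
        gram = tokens[start:start + n]
        for k in range(len(interior) + 1):
            for skipped in combinations(interior, k):
                yield tuple(wildcard if i in skipped else w
                            for i, w in enumerate(gram))

# Toy example: trigrams of "the quick dog barks",
# including skipped variants such as ("the", "_", "dog").
print(list(skip_ngrams("the quick dog barks".split(), 3)))
```

Counting these wildcard patterns alongside ordinary n-grams gives the extra lower-order statistics that the generalized model interpolates with modified Kneser-Ney smoothing, which is where the perplexity gains on sparse training data come from.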