Bookmarks for October 11, 2016 (2 items)

  • Understanding intermediate layers using linear classifier probes

    Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can b… (A minimal code sketch of such a probe follows the bookmark list below.)

    kaz_uki_1014 2016/10/11
    Neural network models have a reputation for being black boxes. We propose a new method to understand better the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as the aux…
  • ICLR 2016 VAE summary (Iclr2016 vaeまとめ)

    This document summarizes a presentation about variational autoencoders (VAEs) given at the ICLR 2016 conference. It discusses 5 VAE-related papers from ICLR 2016, including Importance Weighted Autoencoders, The Variational Fair Autoencoder, Generating Images from Captions with Attention, Variational Gaussian Process, and Variationally Auto-Encoded Deep Gaussian Processes. … (A minimal VAE objective sketch also follows the bookmark list below.)

    kaz_uki_1014 2016/10/11
    1. ICLR 2016 VAE summary, by Masahiro Suzuki (鈴木雅大). 2. About this presentation: today's content centers on VAE-related work presented at ICLR. ICLR 2016 was held May 2-4, 2016 in San Juan, Puerto Rico. Number of presentations: conference track…
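
The first bookmark's abstract describes linear classifier probes: a separate linear (softmax) classifier is trained on the frozen activations of each layer, and its accuracy measures how linearly separable the classes are at that depth. The sketch below illustrates the idea in PyTorch; the toy MLP, the synthetic data, and the training schedule are placeholder assumptions, not the paper's actual setup.

```python
# Minimal sketch of linear classifier probes, assuming a toy PyTorch MLP.
# The data, layer sizes, and training loop are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 256 samples, 20 features, 3 classes (placeholder, not real data).
X = torch.randn(256, 20)
y = torch.randint(0, 3, (256,))

# A small MLP whose intermediate activations we want to probe.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 3),
)

def layer_features(x):
    """Run a forward pass and collect the activation after every layer."""
    feats, h = [], x
    for layer in model:
        h = layer(h)
        feats.append(h.detach())  # detach: probes must never influence the model
    return feats

# One independent linear probe (plain softmax classifier) per layer.
feats = layer_features(X)
probes = [nn.Linear(f.shape[1], 3) for f in feats]
optims = [torch.optim.Adam(p.parameters(), lr=1e-2) for p in probes]
loss_fn = nn.CrossEntropyLoss()

# Train only the probes; the model itself stays frozen.
for _ in range(200):
    for probe, opt, f in zip(probes, optims, feats):
        opt.zero_grad()
        loss_fn(probe(f), y).backward()
        opt.step()

# Per-layer probe accuracy: a rough measure of linear separability at each depth.
for i, (probe, f) in enumerate(zip(probes, feats)):
    acc = (probe(f).argmax(dim=1) == y).float().mean().item()
    print(f"layer {i}: probe accuracy = {acc:.2f}")
```

Because every activation is detached before the probes see it, training the probes never updates the original model, so the per-layer accuracies act as a diagnostic of the representation rather than as part of it.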
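
The papers listed in the second bookmark all build on the same variational autoencoder objective: maximize the evidence lower bound (ELBO), a reconstruction term minus the KL divergence between the approximate posterior and the prior. Below is a hedged, minimal sketch of that objective in PyTorch; the layer sizes, the single-layer encoder and decoder, and the random binary "images" are placeholders, not anything taken from the slides or the cited papers.

```python
# Minimal VAE sketch: one encoder/decoder layer each, Gaussian posterior,
# Bernoulli likelihood. All sizes and data here are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)      # outputs Bernoulli logits

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, x_logits, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)), averaged over the batch.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.shape[0]

# One illustrative optimization step on random binary "images".
vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
x = torch.rand(32, 784).round()

opt.zero_grad()
x_logits, mu, logvar = vae(x)
loss = negative_elbo(x, x_logits, mu, logvar)
loss.backward()
opt.step()
print(f"negative ELBO: {loss.item():.1f}")
```

Papers such as Importance Weighted Autoencoders modify exactly this objective, for example by averaging several posterior samples inside the log to obtain a tighter bound.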