Ryobot's bookmarks on MLP (2)

  • Pay Attention to MLPs

    Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple network architecture, gMLP, based on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers…

    Ryobot 2021/06/05
    gMLP, like MLP-Mixer, is a network architecture that transposes the input and feeds it through MLPs. The experiments show it matches Transformer performance not only on vision tasks but on text tasks as well (see the gMLP sketch after this list).
  • MLP-Mixer: An all-MLP Architecture for Vision

    Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them are necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer…

    Ryobot 2021/06/05
    A study arguing that if you transpose the input and feed it through MLPs, you can pick up cross-patch information, so maybe CNNs/attention aren't needed. Its performance isn't outstanding, but it prompts a re-evaluation of MLPs (see the Mixer sketch below this list).
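
For concreteness, here is a minimal PyTorch sketch of the transpose-based token mixing both comments describe. This is not the authors' code; the sizes (196 patches, 512 channels, hidden widths) are illustrative assumptions. A Mixer block applies one MLP across the patch axis (after a transpose) and one across the channel axis:

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: a token-mixing MLP over the patch axis,
    then a channel-mixing MLP over the channel axis (illustrative sizes)."""
    def __init__(self, num_patches: int, dim: int,
                 token_hidden: int = 256, channel_hidden: int = 1024):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # This MLP acts on the patch axis, hence the transposes in forward().
        self.token_mlp = nn.Sequential(
            nn.Linear(num_patches, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, num_patches),
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim),
        )

    def forward(self, x):  # x: (batch, num_patches, dim)
        # Transpose so the MLP mixes information across patches, then undo it.
        y = self.norm1(x).transpose(1, 2)           # (batch, dim, num_patches)
        x = x + self.token_mlp(y).transpose(1, 2)   # residual token mixing
        x = x + self.channel_mlp(self.norm2(x))     # residual channel mixing
        return x

x = torch.randn(8, 196, 512)  # e.g. 14x14 patches, 512 channels
print(MixerBlock(196, 512)(x).shape)  # torch.Size([8, 196, 512])
```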
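
And a sketch of gMLP's Spatial Gating Unit, which replaces self-attention with a linear projection over the patch axis that gates half of the channels. Again the hyperparameters are illustrative assumptions, though the near-identity initialization of the spatial projection does follow the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """gMLP's gating: split channels in half, mix one half across the
    patch axis with a linear map, and use it to gate the other half."""
    def __init__(self, dim: int, num_patches: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim // 2)
        # Spatial projection mixes across patches (the "transposed MLP" part).
        self.spatial_proj = nn.Linear(num_patches, num_patches)
        nn.init.zeros_(self.spatial_proj.weight)  # near-identity init (paper)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x):  # x: (batch, num_patches, dim)
        u, v = x.chunk(2, dim=-1)
        v = self.norm(v).transpose(1, 2)           # (batch, dim//2, num_patches)
        v = self.spatial_proj(v).transpose(1, 2)   # (batch, num_patches, dim//2)
        return u * v                               # elementwise gate

class gMLPBlock(nn.Module):
    def __init__(self, dim: int, num_patches: int, hidden: int = 1024):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj_in = nn.Linear(dim, hidden)
        self.sgu = SpatialGatingUnit(hidden, num_patches)
        self.proj_out = nn.Linear(hidden // 2, dim)

    def forward(self, x):  # x: (batch, num_patches, dim)
        y = F.gelu(self.proj_in(self.norm(x)))
        return x + self.proj_out(self.sgu(y))      # residual connection

x = torch.randn(8, 196, 256)
print(gMLPBlock(256, 196)(x).shape)  # torch.Size([8, 196, 256])
```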