Comparison of three technical papers presented at SIGGRAPH 2017 in the "Speech and Facial Animation" session.
http://s2017.siggraph.org/technical-papers/sessions/speech-and-facial-animation

Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion
Tero Karras (NVIDIA), Timo Aila (NVIDIA), Samuli Laine (NVIDIA), Antti Herva (Remedy Entertainment), Jaakko Lehtinen (NVIDIA and Aalto University)
ACM Transactions on Graphics (Proc. SIGGRAPH 2017), 36(4), Article 94, July 2017
http://research.nvidia.com/publication/2017-07_Audio-Driven-Facial-Animation
Abstract: We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone.

Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks
Samuli Laine (NVIDIA), Tero Karras (NVIDIA), Timo Aila (NVIDIA), Antti Herva (Remedy Entertainment), Shunsuke Saito (Pinscreen, University of Southern California), Ronald Yu (Pinscreen, University of Southern California), Hao Li (USC Institute for Creative Technologies)
In Proceedings of SCA '17, Los Angeles, CA, USA, July 28-30, 2017
http://research.nvidia.com/publication/facial-performance-capture-deep-neural-networks
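The core idea in the Karras et al. abstract, a single network that maps a window of raw audio samples, together with a learned emotion code, to the 3D vertex coordinates of a face mesh, can be sketched in a few lines. This is a minimal illustration only, not the paper's architecture: the layer sizes, window length, vertex count, and the names `animate`, `W_audio`, and `W_out` are all hypothetical, and random weights stand in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper):
AUDIO_WINDOW = 520    # samples of raw audio per inference window
EMOTION_DIM = 16      # size of the learned latent emotion code
NUM_VERTICES = 5022   # vertices in the face mesh

# Randomly initialized weights stand in for trained network parameters.
W_audio = rng.standard_normal((AUDIO_WINDOW, 256)) * 0.01
W_out = rng.standard_normal((256 + EMOTION_DIM, NUM_VERTICES * 3)) * 0.01

def animate(audio_window: np.ndarray, emotion_code: np.ndarray) -> np.ndarray:
    """Map one audio window plus an emotion code to 3D vertex positions."""
    h = np.tanh(audio_window @ W_audio)          # extract audio features
    h = np.concatenate([h, emotion_code])        # latent code disambiguates expression
    return (h @ W_out).reshape(NUM_VERTICES, 3)  # one full face mesh per window

verts = animate(rng.standard_normal(AUDIO_WINDOW), np.zeros(EMOTION_DIM))
print(verts.shape)  # (5022, 3)
```

The point of the concatenated emotion code is that the audio alone underdetermines the face: the same waveform can be spoken with different expressions, so the network learns a separate latent input that accounts for that residual variation and can be dialed at inference time.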