“Neural Networks are not black boxes. They are a big pile of linear algebra.” - Randall Munroe, xkcd

Machine learning has a wide range of models for tasks such as classification, regression, and clustering. Neural networks are one of the most successful models, having experienced a resurgence in use over the past decade due to improvements in computational power and advanced software libraries. The
Condensed Matter Physics (CMP) seeks to understand the microscopic interactions of matter at the quantum and atomistic levels, and describes how these interactions result in both mesoscopic and macroscopic properties. CMP overlaps with many other important branches of science, such as Chemistry, Materials Science, Statistical Physics, and High-Performance Computing. With the advancements in modern
Target scope of the conference: Deep learning plays a central role in recent developments in artificial intelligence (AI) research. Various ideas based on physics appear in deep-learning research, and consequently deep learning and physics are intimately related. This international conference is dedicated to (1) applications of deep learning to physics, (2) discovering similarities among
The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as
8:30 - 8:40 : Opening remarks
8:40 - 9:10 : Invited talk: Andrea Montanari, Linearized two-layers neural networks in high dimension [Video/Slides]
9:10 - 9:40 : Invited talk: Lenka Zdeborova, Loss landscape and behaviour of algorithms in the spiked matrix-tensor model [Video/Slides]
9:40 - 10:20 : Poster spotlights [Video/Slides, Slides only]
10:20 - 11:00 : Break and poster discussion
11:00 - 11:
We apply techniques from natural language processing, computational linguistics, and machine learning to investigate papers in hep-th and four related sections of the arXiv: hep-ph, hep-lat, gr-qc, and math-ph. The titles of all papers in each of these sections, from the inception of the arXiv until the end of 2017, are extracted and treated as a corpus, which we use to train the neural network Wo
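The corpus-preparation step described above — extracting titles and treating them as training data for a word-embedding model — can be sketched as follows. This is a minimal illustration, not the authors' code: the example titles, window size, and helper names are assumptions, and an embedding model (e.g. a word2vec-style network) would then be trained on the resulting (center, context) pairs.

```python
import re
from collections import Counter

def tokenize(title):
    """Lowercase a title and split it into alphanumeric word tokens."""
    return re.findall(r"[a-z0-9]+", title.lower())

def build_corpus(titles):
    """Turn raw title strings into token lists, one list per title."""
    return [tokenize(t) for t in titles]

def skipgram_pairs(corpus, window=2):
    """Yield (center, context) word pairs -- the training examples a
    skip-gram embedding model consumes."""
    for tokens in corpus:
        for i, center in enumerate(tokens):
            lo = max(0, i - window)
            hi = min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    yield center, tokens[j]

# Hypothetical titles standing in for the extracted arXiv corpus.
titles = [
    "Holographic entanglement entropy in AdS/CFT",
    "Entanglement entropy and conformal field theory",
]
corpus = build_corpus(titles)
vocab = Counter(tok for toks in corpus for tok in toks)
pairs = list(skipgram_pairs(corpus))
```

Feeding `pairs` to any skip-gram trainer (or passing `corpus` directly to a library such as gensim) yields vectors in which words co-occurring in titles, e.g. "entanglement" and "entropy", end up close together.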