Muse: Text-To-Image Generation via Masked Generative Transformers
Huiwen Chang*, Han Zhang*, Jarred Barber†, AJ Maschinot†, José Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein†, Yuanzhen Li†, Dilip Krishnan†
*Equal contribution. †Core contribution.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models.
This month, Google forced out a prominent AI ethics researcher after she voiced frustration with the company for making her withdraw a research paper. The paper pointed out the risks of language-processing artificial intelligence, the type used in Google Search and other text analysis products. Among those risks is the large carbon footprint of developing this kind of AI technology. By some estimates, training a single large AI model can generate as much carbon emissions as five cars over their lifetimes.
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth, width, and resolution using a simple yet highly effective compound coefficient.
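The compound-scaling idea can be sketched in a few lines. The coefficient values below (α = 1.2, β = 1.1, γ = 1.15) are the ones reported in the EfficientNet paper, chosen under the constraint α·β²·γ² ≈ 2 so that total compute roughly doubles per unit of the compound coefficient φ; everything else here is an illustrative sketch, not the paper's implementation.

```python
# Sketch of compound scaling: depth, width, and resolution are all scaled
# together by a single compound coefficient phi, instead of tuning each
# dimension by hand.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # coefficients reported in the paper

def compound_scale(phi: float):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# FLOPS grow roughly with depth * width^2 * resolution^2, so the constraint
# ALPHA * BETA**2 * GAMMA**2 ~= 2 approximately doubles compute per unit phi.
depth_mult, width_mult, res_mult = compound_scale(2)
```

Under this rule, choosing a larger φ yields a family of progressively bigger networks (EfficientNet-B1, B2, …) from one baseline architecture.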
If there’s one thing I’ve learned over 15 years working on Google Search, it’s that people’s curiosity is endless. We see billions of searches every day, and 15 percent of those queries are ones we haven’t seen before, so we’ve built ways to return results for queries we can’t anticipate. When people like you or me come to Search, we aren’t always quite sure about the best way to formulate a query.
Senior staff research scientist at Google DeepMind. beenkim at csail dot mit dot edu. I am interested in helping humans communicate with complex machine learning models: not only by building tools (and tools to criticize them), but also by studying their nature compared to that of humans. A Quanta magazine article (written by John Pavlus) is a great description of what I do and why. I believe the language that humans
ML has revolutionized vision, speech, and language understanding, and is being applied in many other fields. That’s an extraordinary achievement in the technology's short history, and even more impressive considering there is still no dedicated ML hardware. Back in January, Google AI chief and former head of Google Brain Jeff Dean co-published the paper A New Golden Age in Computer Architecture: Empowering the Machine-Learning Revolution.
Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to the position of a record within an unsorted array, and a BitMap-Index as a model to indicate whether a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes.
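As a toy illustration of that premise (not the paper's actual method), one can fit a linear model from keys to positions in a sorted array, record its worst-case prediction error at build time, and then answer lookups with a binary search restricted to the predicted window. The class and data below are hypothetical, for illustration only.

```python
import bisect

# Hypothetical sketch of "index as model": a linear fit key -> position over a
# sorted array, corrected by a binary search bounded by the model's worst-case
# training error. Real learned indexes use richer (e.g. staged/neural) models.
class LinearLearnedIndex:
    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Least-squares fit of position ~ a*key + b.
        mean_x = sum(self.keys) / n
        mean_y = (n - 1) / 2
        cov = sum((x - mean_x) * (i - mean_y) for i, x in enumerate(self.keys))
        var = sum((x - mean_x) ** 2 for x in self.keys)
        self.a = cov / var if var else 0.0
        self.b = mean_y - self.a * mean_x
        # Worst-case prediction error bounds the correction search window.
        self.err = max(abs(self._predict(x) - i) for i, x in enumerate(self.keys))

    def _predict(self, key):
        pos = int(round(self.a * key + self.b))
        return min(max(pos, 0), len(self.keys) - 1)

    def lookup(self, key):
        p = self._predict(key)
        lo = max(0, p - self.err)
        hi = min(len(self.keys), p + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < len(self.keys) and self.keys[i] == key:
            return i
        return None

idx = LinearLearnedIndex([2, 3, 5, 7, 11, 13])
```

The error bound is what makes the model a correct index: however bad the fit, the bounded search always lands on the true position of a stored key.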
Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D. Sculley (Google, Inc.)
Abstract: Any sufficiently complex system acts as a black box when it becomes easier to experiment with than to understand. Hence, black-box optimization has become increasingly important as systems have become more complex. In this paper we describe Google Vizier, a Google-internal service for performing black-box optimization that has become the de facto parameter tuning engine at Google.
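A minimal sketch of the black-box setting the abstract describes: the optimizer sees only (parameters → objective value) evaluations, never the objective's internals. Random search, the simplest baseline such services support, illustrates the interface; the objective and parameter names below are made-up examples, not anything from the paper.

```python
import random

# Black-box optimization: minimize objective(x) using only its evaluations.
# Random search is the simplest baseline; real services also offer Bayesian
# optimization and other model-based strategies behind the same interface.
def random_search(objective, bounds, trials=200, seed=0):
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(trials):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        y = objective(x)  # the only access the optimizer has to the system
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Made-up objective with known minimum 0 at (1, -2), for illustration.
best_x, best_y = random_search(
    lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
    bounds=[(-5, 5), (-5, 5)],
)
```

The key design point is the narrow interface: because the optimizer needs only suggest/evaluate round-trips, the same service can tune anything from ML hyperparameters to physical systems.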