
Search results for "define conditional probability distribution function": 1 - 8 of 8

  • How a simple Linux kernel memory corruption bug can lead to complete system compromise

    In this case, reallocating the object as one of those three types didn't seem to me like a nice way forward (although it should be possible to exploit this somehow with some effort, e.g. by using count.counter to corrupt the buf field of seq_file). Also, some systems might be using the slab_nomerge kernel command line flag, which disables this merging behavior. Another approach that I didn't look

  • The Annotated Diffusion Model

    In this blog post, we'll take a deeper look into Denoising Diffusion Probabilistic Models (also known as DDPMs, diffusion models, score-based generative models or simply autoencoders), as researchers have been able to achieve remarkable results with them for (un)conditional image/audio/video generation. Popular examples (at the time of writing) include GLIDE and DALL-E 2 by OpenAI, Latent Diffusion
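The snippet above mentions the DDPM family; the core of the forward process is that a clean sample can be noised to any step t in closed form. Below is a minimal NumPy sketch of that idea, sampling x_t ~ q(x_t | x_0) = N(sqrt(ᾱ_t)·x_0, (1 − ᾱ_t)·I) under the standard linear β schedule; the helper name `forward_diffuse` is purely illustrative, not from the post.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form (DDPM forward/noising process)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]   # abar_t = prod_{s<=t} (1 - beta_s)
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # linear schedule, as in Ho et al.
x0 = np.ones(4)                              # toy "clean" sample
x_last = forward_diffuse(x0, 999, betas, rng)  # at the final step, nearly pure Gaussian noise
```

Because ᾱ_t shrinks roughly like exp(−Σβ_s), the signal term vanishes by the last step, which is why sampling can start from pure noise.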
  • Aman's AI Journal • Primers • Ilya Sutskever's Top 30

    Ilya Sutskever's Top 30 Reading List: The First Law of Complexodynamics; The Unreasonable Effectiveness of Recurrent Neural Networks; Understanding LSTM Networks; Recurrent Neural Network Regularization; Keeping Neural Networks Simple by Minimizing the Description Length of the Weights; Pointer Networks; ImageNet Classification with Deep Convolutional Neural Networks; Order Matters: Sequence to Sequence f

  • https://deeplearningtheory.com/PDLT.pdf

    The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks. Daniel A. Roberts and Sho Yaida, based on research in collaboration with Boris Hanin. drob@mit.edu, shoyaida@fb.com. Contents: Preface; 0 Initialization; 0.1 An Effective Theory Approach; 0.2 The Theoretical Minimum

  • Large Text Compression Benchmark

    Large Text Compression Benchmark. Matt Mahoney. Last update: Mar. 25, 2026. This competition ranks lossless data compression programs by the compressed size (including the size of the decompression program) of the first 10^9 bytes of the XML text dump of the English version of Wikipedia on Mar. 3, 2006. About the test data: the goal of this benchmark is not to find the best overall compress

  • Diffusion Models as a kind of VAE

    Recently I have been studying a class of generative models known as diffusion probabilistic models. These models were proposed by Sohl-Dickstein et al. in 2015 [1]; however, they first caught my attention last year when Ho et al. released "Denoising Diffusion Probabilistic Models" [2]. Building on [1], Ho et al. showed that a model trained with a stable variational

  • The Little Book of Deep Learning

    The Little Book of Deep Learning. François Fleuret. François Fleuret is a professor of computer science at the University of Geneva, Switzerland. The cover illustration is a schematic of the Neocognitron by Fukushima [1980], a key ancestor of deep neural networks. This ebook is formatted to fit on a phone screen. Contents: List of figures; Foreword; I Foundations; 1 Machine Learnin

  • Why model calibration matters and how to achieve it

    by LEE RICHARDSON & TAYLOR POSPISIL. Calibrated models make probabilistic predictions that match real-world probabilities. This post explains why calibration matters, and how to achieve it. It discusses practical issues that calibrated predictions solve and presents a flexible framework to calibrate any classifier. Calibration applies in many applications, and hence the practicing data scientist mu
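The calibration snippet above defines a calibrated model as one whose predicted probabilities match observed frequencies. A common way to check this is to bin predictions and compare average confidence against the empirical event rate per bin (the data behind a reliability diagram). The sketch below is a minimal illustration of that check, not the post's own framework; `reliability_bins` is a hypothetical helper name.

```python
import numpy as np

def reliability_bins(y_true, y_prob, n_bins=10):
    """Per-bin mean predicted probability vs. observed positive frequency.
    A well-calibrated classifier has the two nearly equal in every bin."""
    idx = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    conf, freq = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            conf.append(y_prob[mask].mean())   # average confidence in bin b
            freq.append(y_true[mask].mean())   # empirical event rate in bin b
    return np.array(conf), np.array(freq)

# Synthetic, perfectly calibrated predictor: labels are drawn with the stated probability,
# so confidence and frequency should agree up to sampling noise.
rng = np.random.default_rng(1)
p = rng.uniform(size=100_000)
y = (rng.uniform(size=p.size) < p).astype(float)
conf, freq = reliability_bins(y, p)
```

On miscalibrated predictions (e.g. overconfident softmax outputs), `freq` falls below `conf` in the high-probability bins, which is exactly the gap that recalibration methods aim to close.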