
Search results for "probability theory": 1–22 of 22

  • Computer science textbooks I read at university - ジョイジョイジョイ

    The other day I received my PhD in Informatics. Looking back over the nine years of my undergraduate and graduate studies, I summarize here the computer-science textbooks and technical books I read. I was the type to self-study rather than attend lectures, so I believe that reading through the books listed here would give you roughly the knowledge of an informatics PhD without attending university. Note that this post does not cover reading and writing papers, which matters especially in graduate school; for that, see my posts on a daily paper-reading routine and on how to write papers. joisino.hatenablog.com Legend: (partial) means I either read only a few chapters, or read to the end with a shallow understanding that I now only dimly remember. ☆ marks books I especially recommend. First year of undergrad: 寺田 文行『線形代数 増訂版』, 黒田 成俊『微分積分』, 河野 敬雄『確率概論』, 東京大学教養学部統計学教室『統計学…

  • What We Learned from a Year of Building with LLMs (Part I)

    It’s an exciting time to build with large language models (LLMs). Over the past year, LLMs have become “good enough” for real-world applications. The pace of improvements in LLMs, coupled with a parade of demos on social media, will fuel an estimated $200B…

  • A non-mathematical introduction to Kalman Filters for programmers - Pravesh Koirala

    Read my manifesto on Code as an alternative to Mathematics. Code for this article can be found on this Colab Notebook should you choose to follow along. Why Kalman Filters? Kalman filters are ingenious. If you have never heard of them, then a very intuitive (and arguably reductive) way to think about them is to consider them as a funnel where you pour information from multiple noisy sources to cond…
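    The "funnel" intuition above can be sketched as a minimal one-dimensional Kalman filter. This is a generic illustration (variable names and numbers are mine, not the article's Colab code): each noisy measurement is blended into the estimate with a weight set by the relative uncertainties.

    ```python
    # Minimal 1-D Kalman filter: fuse noisy measurements of a constant value.
    def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1000.0):
        est, var = init_est, init_var
        for z in measurements:
            k = var / (var + meas_var)   # Kalman gain: trust in the new measurement
            est = est + k * (z - est)    # pull the estimate toward the measurement
            var = (1 - k) * var          # uncertainty shrinks after each update
        return est, var

    # Four noisy readings of a value near 5.0; the estimate converges there.
    est, var = kalman_1d([4.9, 5.2, 5.0, 5.1], meas_var=0.1)
    ```

    With a large initial variance the first measurement dominates, and each subsequent reading tightens the estimate; the full filter adds a prediction step for values that change over time.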

  • Optimizing your LLM in production

    Note: This blog post is also available as a documentation page on Transformers. Large Language Models (LLMs) such as GPT3/4, Falcon, and LLama are rapidly advancing in their ability to tackle human-centric tasks, establishing themselves as essential tools in modern knowledge-based industries. Deploying these models in real-world tasks remains challenging, however: To exhibit near-human text unders…

  • Daily Life: Rediscovering Hume's problem of induction

    April 14, 2024. Hume's problem of induction is mentioned almost without fail whenever the philosophical problems surrounding induction are introduced in contemporary philosophy of science. Because it is such a fixture, many people may have the impression that "Hume's problem of induction" has been debated as a major philosophical problem continuously ever since Hume published A Treatise of Human Nature. I myself took it for granted, until I began looking into the history of the philosophy of science, that Hume's problem of induction had been a great problem for over two hundred years. As a little research shows, however, the debates over "induction" in the early-to-mid nineteenth century (among Herschel, Whewell, Mill, and others) paid no attention at all to the point Hume raised (I touched on this in 『科学哲学の源流をたどる』 and will do so again below). Who, then, pushed Hume's problem of induction to the center of the philosophical agenda…

  • Couldn't that research just be done with ChatGPT? Dialogue-system research in the LLM era (.pdf)

    Knowledge Acquisition & Dialogue Research Team (知識獲得・対話研究チーム), Intelligent Robot Dialogue Laboratory, NAIST. "Couldn't that research just be done with ChatGPT? Dialogue-system research in the LLM era." Koichiro Yoshino (吉野 幸一郎), RIKEN Guardian Robot Project / Nara Institute of Science and Technology, 2023/08/31. Ⓒ Koichiro Yoshino, Guardian Robot Project, RIKEN…

  • Deep Learning - Foundations and Concepts

    New: complete set of figures in PDF format available for download. This book offers a comprehensive introduction to the central ideas that underpin deep learning. It is intended both for newcomers to machine learning and for those already experienced in the field. Covering key concepts relating to contemporary architectures and techniques, this essential book equips readers with a robust foundation…

  • Happy New Year: GPT in 500 lines of SQL - EXPLAIN EXTENDED

    Translations: Russian. This year, the talk of the town was AI and how it can do everything for you. I like it when someone or something does everything for me. To this end, I decided to ask ChatGPT to write my New Year's post: "Hey ChatGPT. Can you implement a large language model in SQL?" "No, SQL is not suitable for implementing large language models. SQL is a language for managing and querying d…

  • How Recent Google Updates Punish Good SEO: 50-Site Case Study - Zyppy SEO Consulting

    SEOs need to rethink "over-optimization". Are recent Google updates now targeting SEO practices to demote informational sites that are "too optimized"? Using metrics provided by Ahrefs (thank you, Patrick Stox!) and collecting thousands of data points across impacted sites, I conducted a 50-site case study to look for answers. To begin w…

  • ZX Spectrum Raytracer - Gabriel Gambetta

    I love raytracers; in fact I’ve written half a book about them. Probably less known is my love for the ZX Spectrum, the 1982 home computer I grew up with, and which started my interest in graphics and programming. This machine is so ridiculously underpowered by today’s standards (and even by 1980s standards) that the inevitable question is: to what extent could I port the Computer Graphics from Scra…

  • Applied LLMs - What We’ve Learned From A Year of Building with LLMs

    A practical guide to building successful LLM products, covering the tactical, operational, and strategic. Also published on O’Reilly Media in three parts: Tactical, Operational, Strategic. Also see the podcast. It’s an exciting time to build with large language models (LLMs). Over the past year, LLMs have become “good enough” for real-world applications. And they’re getting better and cheaper every ye…

  • How bad are search results? Let's compare Google, Bing, Marginalia, Kagi, Mwmbl, and ChatGPT

    Marginalia does relatively well by sometimes providing decent but not great answers and then providing no answers or very obviously irrelevant answers to the questions it can't answer, with a relatively low rate of scams, lower than any other search engine (although, for these queries, ChatGPT returns zero scams and Marginalia returns some). Interestingly, Mwmbl lets users directly edit search res…

  • 'Effective Accelerationism' and the Pursuit of Cosmic Utopia

    TD Original, Dec 14, 2023. How an arcane philosophical rift in Silicon Valley is shaping the race to build artificial general intelligence. All camps in the TESCREAL community ultimately share a fantasy about the distant future. The recent ouster of Sam Altman from OpenAI, followed by his reinstatement within a week, triggered…

  • A critical review of Marketing Mix Modeling — From hype to reality

    Context: Most companies spend large chunks of their budget on marketing, often without knowing the return on that investment. Marketing Mix Modeling has been promoted as the one method to shed light on the effect of marketing. Not quite coincidentally, this is mainly supported by people who have a self-serving interest in advocating MMM. Opposing standpoints are few and far between. In this post, I…

  • How web bloat impacts users with slow devices

    At first glance, the table seems about right, in that the sites that feel slow unless you have a super fast device show up as slow in the table (as in, max(LCP*, CPU) is high on lower-end devices). When I polled folks about what platforms they thought would be fastest and slowest on our slow devices (Mastodon, Twitter, Threads), they generally correctly predicted that Wordpress and Ghost and Wor…

  • fast.ai - Can LLMs learn from a single example?

    We’ve noticed an unusual training pattern in fine-tuning LLMs. At first we thought it was a bug, but now we think it shows LLMs can learn effectively from a single example. Summary: recently, while fine-tuning a large language model (LLM) on multiple-choice science exam questions, we observed some highly unusual training loss curves. In particular, it appeared the model was able to rapidly memorize e…

  • Book: Alice’s Adventures in a differentiable wonderland

    Neural networks surround us, in the form of large language models, speech transcription systems, molecular discovery algorithms, robotics, and much more. Stripped of anything else, neural networks are compositions of differentiable primitives, and studying them means learning how to program and how to interact with these models, a particular…

  • Introduction to Decision Trees in Supervised Learning

    The Decision Tree algorithm is a type of tree-based modeling under Supervised Machine Learning. Decision Trees are primarily used to solve classification problems (the algorithm, in this case, is called the Classification Tree), but they can also be used to solve regression problems (the algorithm, in this case, is called the Regression Tree). The concept of trees is found in graph theory and is u…
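    The classification case described above can be illustrated with the simplest possible tree: a one-level "stump" that picks the single threshold minimizing misclassification. This is a generic sketch (the function name and data are mine, not the article's), but the split-selection logic is exactly what deeper trees repeat recursively at every node.

    ```python
    # A one-level decision tree ("stump") for classification: choose the
    # threshold on a single feature that minimizes misclassified points.
    def fit_stump(xs, ys):
        best = None
        for t in sorted(set(xs)):
            left  = [y for x, y in zip(xs, ys) if x <  t]
            right = [y for x, y in zip(xs, ys) if x >= t]
            if not left or not right:
                continue  # degenerate split: everything on one side
            # Each side predicts its majority label.
            l_lab = max(set(left), key=left.count)
            r_lab = max(set(right), key=right.count)
            err = sum(y != (l_lab if x < t else r_lab) for x, y in zip(xs, ys))
            if best is None or err < best[0]:
                best = (err, t, l_lab, r_lab)
        _, t, l_lab, r_lab = best
        return lambda x: l_lab if x < t else r_lab

    # Two well-separated clusters: the stump finds the boundary between them.
    predict = fit_stump([1, 2, 3, 8, 9, 10], [0, 0, 0, 1, 1, 1])
    ```

    A regression tree differs only in the leaf prediction (the mean of each side instead of the majority label) and the split criterion (variance reduction instead of misclassification count).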

  • An Introduction to Bayesian Network for Machine Learning

    A Bayesian network is a graphical model representing probabilistic relationships among variables. Introduction: Probabilistic models are based on the theory of probability. I guess that was quite self-explanatory, considering it is in the name. Probabilistic models consider the fact that randomness plays a role in predicting future outcomes. The opposite of randomness is determinism, which tells…
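    The probabilistic relationships such a graph encodes let the joint distribution factor into one term per node. A minimal hand-rolled sketch, using the classic Rain/Sprinkler/WetGrass network with illustrative numbers of my own (not taken from the article):

    ```python
    # Tiny Bayesian network: Rain -> WetGrass <- Sprinkler.
    # The joint factorizes as P(R) * P(S) * P(W | R, S).
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: 0.1, False: 0.9}
    P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
        (True, True): 0.99, (True, False): 0.9,
        (False, True): 0.8, (False, False): 0.0,
    }

    def p_joint(r, s, w):
        pw = P_wet[(r, s)]
        return P_rain[r] * P_sprinkler[s] * (pw if w else 1 - pw)

    # Inference by enumeration: P(Rain=True | WetGrass=True),
    # summing the hidden Sprinkler variable out of the joint.
    num = sum(p_joint(True, s, True) for s in (True, False))
    den = sum(p_joint(r, s, True) for r in (True, False) for s in (True, False))
    posterior = num / den
    ```

    Enumeration is exponential in the number of hidden variables; real libraries use the graph structure (variable elimination, belief propagation) to do the same computation efficiently.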

  • Building a Simple Artificial Neural Network in JavaScript

    This article will discuss building a simple neural network using JavaScript. However, let’s first check what deep neural networks and artificial neural networks are. Deep Neural Network and Artificial Neural Network: Artificial Neural Networks (ANNs) and Deep Neural Networks (DNNs) are related concepts, but they are different. The inspiration behind these artificial neural networks for machine lear…
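    The article builds its network in JavaScript; the core forward pass of such a simple ANN can be sketched in a few lines of Python (a generic 2-2-1 sigmoid network of my own, not the article's code):

    ```python
    import math
    import random

    # Forward pass of a tiny 2-2-1 feedforward network with sigmoid units.
    random.seed(0)

    def sigmoid(z):
        return 1 / (1 + math.exp(-z))

    # Randomly initialized weights: two hidden units, one output unit.
    w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    w_out = [random.uniform(-1, 1) for _ in range(2)]

    def forward(x):
        # Each hidden unit: weighted sum of inputs, squashed by sigmoid.
        hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
        # Output unit: weighted sum of hidden activations, squashed again.
        return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

    y = forward([0.5, 0.9])  # a single prediction in (0, 1)
    ```

    Training would add backpropagation on top of this pass; a "deep" network is the same construction with more hidden layers.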

  • Trying distilabel on Google Colab | npaka

    I tried distilabel on Google Colab; here is a summary. 1. distilabel: distilabel is an AI Feedback (AIF) framework for building datasets for LLMs using LLMs. ・Integrates with the most common LLM libraries and APIs (HuggingFace Transformers, OpenAI, vLLM, etc.) ・Supports multiple tasks such as Self-Instruct and preference datasets ・Exporting datasets to Argilla makes data exploration and further annotation easy. 2. Setup: the setup steps on Google Colab are as follows. (1) Install the package: !pip install distilabel[openai,argilla]…

  • The sad state of property-based testing libraries

    Posted on Jul 2, 2024. Property-based testing is a rare example of academic research that has made it to the mainstream in less than 30 years. Under the slogan “don’t write tests, generate them”, property-based testing has gained support from a diverse group of programming language communities. In fact, the Wikipedia page of the original property-bas…
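    The "don't write tests, generate them" slogan means asserting a property over many random inputs rather than a handful of hand-picked cases. A dependency-free sketch of the idea (libraries like QuickCheck or Hypothesis add shrinking of failing cases and smarter generators on top of this):

    ```python
    import random

    # Property: reversing a list twice yields the original list.
    def prop_reverse_involutive(xs):
        return list(reversed(list(reversed(xs)))) == xs

    # Generate 100 random test cases instead of writing them by hand.
    random.seed(1)
    failures = []
    for _ in range(100):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
        if not prop_reverse_involutive(xs):
            failures.append(xs)  # a real library would shrink this to a minimal case
    ```

    When a property fails, the recorded counterexample is the starting point for debugging; shrinking, the feature the post cares most about, reduces it to the smallest input that still fails.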
