
Search results for "important algorithms in coding": 1 - 40 of 101

  • The End of Programming as We Know It

    Betty Jean Jennings and Frances Bilas (right) program the ENIAC in 1946. Via the Computer History Museum. Eventually, interpreted languages, which are much easier to debug, became the norm. BASIC, one of the first of these to hit the big time, was at first…

  • Why Go? · microsoft/typescript-go · Discussion #411

    Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did…

  • xz-utils backdoor situation (CVE-2024-3094)

    xz-backdoor.md FAQ on the xz-utils backdoor (CVE-2024-3094). This is a living document. Everything in this document is made in good faith of being accurate, but like I just said: we don't yet know everything about what's going on. Update: I've disabled comments as of 2025-01-26 to avoid everyone having notifications, a year on, if someone wants to suggest a correction. Folks are free to…

  • 転置インデックスの圧縮技法 (Compression Techniques for Inverted Indexes)

    The inverted index is the central data structure in any search engine implementation. Its data structures and algorithms, together with the query processing algorithms, directly determine a search engine's performance. In large-scale search engines especially, compressing the inverted index is essential for improving cache efficiency and speeding up query processing. This article briefly surveys recent techniques for compressing inverted indexes, in particular posting lists. Contents: basics of inverted indexes; data structures and characteristics of inverted indexes; access patterns; recent index compression techniques: Variable-Byte family (VByte, Varint-GB, Varint-G8IU, Masked-VByte, Stream-VByte, Opt-VByte), Simple family (Simple9, Simple16)…

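The VByte scheme that heads this entry's list can be shown with a minimal Python sketch (my own illustration, not code from the article; it uses one common convention where the high bit marks the final byte of each value, and real engines typically encode the d-gaps between sorted document IDs rather than the IDs themselves):

```python
def vbyte_encode(numbers):
    """Variable-byte encode a list of non-negative integers."""
    out = bytearray()
    for n in numbers:
        while n >= 128:
            out.append(n & 0x7F)  # low 7 bits; high bit 0 = more bytes follow
            n >>= 7
        out.append(n | 0x80)      # high bit 1 marks the last byte of the value
    return bytes(out)


def vbyte_decode(data):
    """Decode a VByte stream produced by vbyte_encode."""
    numbers, n, shift = [], 0, 0
    for b in data:
        if b & 0x80:              # stop bit set: the value is complete
            numbers.append(n | (b & 0x7F) << shift)
            n, shift = 0, 0
        else:
            n |= b << shift
            shift += 7
    return numbers


# Posting lists store d-gaps, so small numbers (and short codes) dominate.
postings = [3, 7, 21, 70, 300]
gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
assert vbyte_decode(vbyte_encode(gaps)) == gaps  # round-trips [3, 4, 14, 49, 230]
```

Varint-GB, Stream-VByte, and the other variants named in the excerpt restructure this same 7-bits-plus-flag idea into layouts that decode efficiently with SIMD instructions.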
  • An Interview With Linus Torvalds: Linux and Git - Part 1 30 Years Of Linux

    Jeremy founded Tag1 Consulting in 2007. He has been a contributing core Drupal developer since 2002, and helped establish Drupal as a successful CMS through the early popularity of his personal blog, KernelTrap.org. Over the years, he authored and maintained the core statistics module and throttle module, as well as the pager logic and the initial Drupal 5 installer. He continues to contribute to…

  • Reflections on OpenAI

    I left OpenAI three weeks ago. I had joined the company back in May 2024. I wanted to share my reflections because there's a lot of smoke and noise around what OpenAI is doing, but not a lot of first-hand accounts of what the culture of working there actually feels like. Nabeel Qureshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to…

  • Code Reviews 101 - The Basics | Sema

    Code improves with multiple reviews and revisions, and this process isn't something that can be done alone. Spotting errors in code design is difficult at the best of times — and the closer you are to the work, the harder it can be to critique. That's where code reviews come in…

  • Replit — How to train your own Large Language Models

    Learn how Replit trains Large Language Models (LLMs) using Databricks, Hugging Face, and MosaicML. Introduction: Large Language Models, like OpenAI's GPT-4 or Google's PaLM, have taken the world of artificial intelligence by storm. Yet most companies don't currently have the ability to train these models, and are completely reliant on only a handful of large tech firms as providers of the technology.

  • 100+ Best GitHub Repositories For Machine Learning

    There are millions of GitHub repos, and filtering them is an insane amount of work. It takes huge time and effort, and a lot more. We have done this for you. In this article, we'll share a curated list of 100+ widely known, recommended, and most popular repositories and open-source GitHub projects for Machine Learning and Deep Learning. So without further ado, let's see all the hubs created by experts…

  • What makes Claude Code so damn good (and how to recreate that magic in your agent)!?

    vivek / 2025-08-21. Claude Code is the most delightful AI agent/workflow I have used so far. Not only does it make targeted edits or vibe-coding throwaway tools less annoying, using Claude Code makes me happy. It has enough autonomy to do interesting things, while not inducing a jarring loss of control like some…

  • Hypershell: A Type-Level DSL for Shell-Scripting in Rust | Context-Generic Programming

    Discuss on Reddit, Lobsters, and Hacker News. Summary: I am thrilled to introduce Hypershell, a modular, type-level domain-specific language (DSL) for writing shell-script-like programs in Rust. Hypershell is powered by context-generic programming (CGP), which makes it possible for users to extend or modify both the language syntax and semantics. Estimated reading time: 1~2 hours.

  • Optimizing your LLM in production

    Note: This blog post is also available as a documentation page on Transformers. Large Language Models (LLMs) such as GPT3/4, Falcon, and LLama are rapidly advancing in their ability to tackle human-centric tasks, establishing themselves as essential tools in modern knowledge-based industries. Deploying these models in real-world tasks remains challenging, however: to exhibit near-human text understanding…

  • An interface is not an interface - Recent thoughts about clean coding - Spacely Tech Blog

    Introduction: Recently I've had to work on code which seemed to be based on clean architecture, but after a while I concluded that it probably is not, and it made me think about clean coding in general and the principles of clean architecture in particular. Is it about a set of rules, and we are guaranteed to achieve cleanness as long as we follow them? Or is it about abstract principles, which we…

  • Agents

    Intelligent agents are considered by many to be the ultimate goal of AI. The classic book by Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Prentice Hall, 1995), defines the field of AI research as "the study and design of rational agents." The unprecedented capabilities of foundation models have opened the door to agentic applications that were previously unimaginable…

  • FFmpeg - Ultimate Guide | IMG.LY Blog

    These last two are sometimes referred to as "8 bit" or "10 bit" respectively, especially when talking about videos. That means 8/10 bits per single color channel. Transparency: Some image formats support an additional channel together with the red, green, and blue components: the alpha channel. The alpha channel determines how transparent a single pixel is, and it can have different bit-depths…

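As a small illustration of the alpha channel this excerpt describes (my own sketch, not code from the guide): the standard "over" operator blends a source pixel onto an opaque background by weighting each channel with the normalized alpha.

```python
def over(src_rgb, src_alpha, dst_rgb):
    """Blend a source pixel over an opaque destination pixel.

    src_alpha is the normalized alpha: 0.0 is fully transparent and
    1.0 fully opaque. An 8-bit alpha value a corresponds to a / 255.
    """
    return tuple(round(s * src_alpha + d * (1 - src_alpha))
                 for s, d in zip(src_rgb, dst_rgb))


# Half-transparent red over a blue background yields purple.
print(over((255, 0, 0), 0.5, (0, 0, 255)))  # (128, 0, 128)
```

A higher bit-depth alpha channel (e.g. 10 bits, 1024 levels) simply gives finer steps for the same blend.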
  • Golang Mini Reference 2022: A Quick Guide to the Modern Go Programming Language (REVIEW COPY)

    Harry Yoon, Version 0.9.0, 2022-08-24. This is a review copy, not to be shared or distributed to others. Please forward any feedback or comments to the author: feedback@codingbookspress.com. The book is tentatively scheduled to be published on September 14th, 2022…

  • Preparing for the Systems Design and Coding Interview

    At Big Tech and high-growth startups, coding and systems design interviews are common - and fairly standard. A lot of people have asked me for preparation advice for these. Here is what I used when getting ready for an E5/E6 Facebook interview, and the one at Uber - where I was hired as a senior software engineer (L5). It's the same resources I recommend to people who are preparing for Big Tech or…

  • I Built the Same App 10 Times: Evaluating Frameworks for Mobile Performance | Loren Stewart

    Performance context: All frameworks tested achieve excellent Lighthouse scores (100) with similar First Contentful Paint times. Since performance is essentially identical, bundle size is what differentiates these frameworks for mobile users. The 6.1x range matters for data usage, parse time, and battery consumption. Field data validation: The Chrome User Experience Report (CrUX) provides real-world…

  • Introducing Apple’s On-Device and Server Foundation Models

    At the 2024 Worldwide Developers Conference, we introduced Apple Intelligence, a personal intelligence system integrated deeply into iOS 18, iPadOS 18, and macOS Sequoia. Apple Intelligence comprises multiple highly capable generative models that are specialized for our users' everyday tasks, and can adapt on the fly for their current activity. The foundation models built into Apple Intelligence have been fine-tuned for…

  • MLOps guide

    Update (Jan 11, 2025): I'm working on a minimum viable curriculum for ML/AI engineering. Here's the interest form if you want to test out the curriculum. A collection of materials from introductory to advanced. This is roughly the path I'd follow if I were to start my MLOps journey again. Table of contents: ML + engineering fundamentals; MLOps (overview, intermediate, advanced); career; case studies…

  • GitHub - srush/GPU-Puzzles: Solve puzzles. Learn CUDA.

    by Sasha Rush - srush_nlp. GPU architectures are critical to machine learning, and seem to be becoming even more important every day. However, you can be an expert in machine learning without ever touching GPU code. It is hard to gain intuition working through abstractions. This notebook is an attempt to teach beginner GPU programming in a completely interactive fashion. Instead of providing text…

  • From GPT-2 to gpt-oss: Analyzing the Architectural Advances

    OpenAI just released their new open-weight LLMs this week: gpt-oss-120b and gpt-oss-20b, their first open-weight models since GPT-2 in 2019. And yes, thanks to some clever optimizations, they can run locally (but more about this later). This is the first time since GPT-2 that OpenAI has shared a large, fully open-weight model. Earlier GPT models showed how the transformer architecture scales…

  • Everything a developer needs to know about Generative AI for SaaS

    A few months ago, I knew almost nothing about AI. I used ChatGPT and Co-Pilot (I'm civilized, after all), but a lot of the content around AI was Greek to me. Terms like models, transformers, training, inference, RAG, attention, and agents were unfamiliar. Last week, I completed my first end-to-end AI-based product: AI Code Assist…

  • A Second Conversation with Werner Vogels – Communications of the ACM

    When I joined Amazon in 1998, the company had a single U.S.-based website selling only books and running a monolithic C application on five servers, with a handful of Berkeley DBs for key/value…

  • Beyond the 70%: Maximizing the human 30% of AI-assisted coding

    This is a follow-up to my article "The 70% problem: Hard truths about AI-assisted coding". AI coding assistants like Cursor, Cline, Copilot and WindSurf have transformed how software is built, shouldering much of the grunt work and boilerplate. Yet, as experienced developers and industry leaders note, there remains a crucial portion of software engineering that AI does not handle well – roughly that…

  • Real-world gen AI use cases from the world's leading organizations | Google Cloud Blog

    AI is here, AI is everywhere: top companies, governments, researchers, and startups are already enhancing their work with Google's AI solutions. Published April 12, 2024; last updated October 9, 2025. A year and a half ago, during Google Cloud Next 24, we published this list for the first time. It numbered 101 entries. It felt like a lot at the time, and served as a showcase of how much momentum…

  • Generative AI: A Creative New World

    A powerful new class of large language models is making it possible for machines to write, code, draw and create with credible and sometimes superhuman results. Humans are good at analyzing things. Machines are even better. Machines can analyze a set of data and find patterns in it for a multitude of use cases, whether it's fraud or spam detection, forecasting the ETA of your delivery or predicting…

  • Andrej Karpathy — AGI is still a decade away

    The Andrej Karpathy episode. Andrej explains why reinforcement learning is terrible (but everything else is much worse), why model collapse prevents LLMs from learning the way humans do, why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self-driving took so long to crack, and what he sees as the future of education. Watch on YouTube; listen on Apple Podcasts or Spotify.

  • II. From AGI to Superintelligence: the Intelligence Explosion - SITUATIONAL AWARENESS

    AI progress won't stop at human level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic. Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities…

  • Where Programming, Ops, AI, and the Cloud are Headed in 2021

    In this report, we look at the data generated by the O'Reilly online learning platform to discern trends in the technology industry—trends technology leaders need to follow. But what are "trends"? All too often, trends degenerate into horse races over languages and platforms. Look at all the angst heating up social media when TIOBE or RedMonk releases their reports on language rankings. Those reports…

  • DeepSeek FAQ

    It's Monday, January 27. Why haven't you written about DeepSeek yet? I did! I wrote about R1 last Tuesday. I totally forgot about that. I take responsibility. I stand by the post, including the two biggest takeaways that I highlighted (emergent chain-of-thought via pure reinforcement learning, and the power of distillation), and I mentioned the low cost (which I expanded on in Sharp Tech) and chip…

  • Annotated history of modern AI and deep neural networks

    For a while, DanNet enjoyed a monopoly. From 2011 to 2012 it won every contest it entered, winning four of them in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012).[GPUCNN5] In particular, at IJCNN 2011 in Silicon Valley, DanNet blew away the competition and achieved the first superhuman visual pattern recognition[DAN1] in an international contest. DanNet was also the first deep CNN to win…

  • Explaining my fast 6502 code generator

    To learn how optimizing compilers are made, I built one targeting the 6502 architecture. In a bizarre twist, my compiler generates faster code than GCC, LLVM, and every other compiler I compared it to. I reckon my compiler isn't doing more when it comes to high-level optimizations, so the gains must be from the code generation side. This makes sense, as most compilers are multi-target, with backends…

  • Fantastic Learning Resources

    Aug 6, 2023. People sometimes ask me: "Alex, how do I learn X?". This article is a compilation of advice I usually give. This is "things that worked for me" rather than "the most awesome things on earth". I do consider every item on the list to be fantastic though, and I am forever grateful to the people putting these resources together. Learning to code: I don't think I have…

  • The Junior Developer Extinction: We’re All Building the Next Programming Dark Age

    "I have not failed. I've just found 10,000 ways that won't work." — Thomas Edison. Though to be fair, Edison never had to explain to his manager why the AI-generated light bulb stopped working, and nobody on the team understood the filament design. Picture this scene, familiar to anyone who's conducted code reviews in the past year: a junior developer presents their pull request with the quiet confidence…

  • Hacker News folk wisdom on visual programming

    I'm a fairly frequent Hacker News lurker, especially when I have some other important task that I'm avoiding. I normally head to the Active page (lots of comments, good for procrastination) and pick a nice long discussion thread to browse. So over time I've ended up with a good sense of what topics come up a lot. "The Bay Area is too expensive." "There are too many JavaScript frameworks."…

  • Amazon Linux 2023 のRC版(RC0) が公開されました | DevelopersIO

    The release candidate (RC0) of Amazon Linux 2023, the successor OS to Amazon Linux 2 (previously named Amazon Linux 2022), was published on February 22, 2023 (Amazon Linux 2023 release notes update 2023-02-22). I had a chance to try the Amazon Linux 2023 (RC0) AMI released on February 24, so here is a quick look. AMI: I used the Amazon Linux 2023 AMI available in the AWS Tokyo region (al2023-ami-2023.0.20230222.1-kernel-6.1-x86_64). Amazon Machine Image (AMI)…

  • Unification-free ("keyword") type checking

    From my perspective, one of the biggest open problems in implementing programming languages is how to add a type system to the language without significantly complicating the implementation. For example, in my tutorial Fall-from-Grace implementation the type checker logic accounts for over half of the code. In the following lines-of-code report I've highlighted the modules responsible for type-checking…

  • What's new in Azure OpenAI in Azure AI Foundry Models?

    This article provides a summary of the latest releases and major documentation updates for Azure OpenAI. August 2025: GPT-5 models available (gpt-5, gpt-5-mini, gpt-5-nano); to learn more, see the getting started with reasoning models page. gpt-5-chat is now available; to learn more, see the models page. Registration is required for access to the gpt-5 model. gpt-5-mini, gpt-5-nano, and gpt-5-chat do not…

  • The Second Half

    tldr: We're at AI's halftime. For decades, AI has largely been about developing new training methods and models. And it worked: from beating world champions at chess and Go, surpassing most humans on the SAT and bar exams, to earning IMO and IOI gold medals. Behind these milestones in the history book — DeepBlue, AlphaGo, GPT-4, and the o-series — are fundamental innovations in AI methods: search…