Search results for "latency": 1-40 of 49

Because the tag search returned only a few matches, title search results are shown instead.

There are 49 entries about latency. Related tags include aws, network, and ネットワーク. Popular entries include 『AWS、高速起動にこだわった軽量なJavaScriptランタイム「LLRT」(Low Latency Runtime)をオープンソースで公開。AWS Lambdaでの利用にフォーカス』.
  • AWS、高速起動にこだわった軽量なJavaScriptランタイム「LLRT」(Low Latency Runtime)をオープンソースで公開。AWS Lambdaでの利用にフォーカス

    Amazon Web Services (AWS) has released the lightweight JavaScript runtime "LLRT" (Low Latency Runtime) as open source, an experimental implementation focused on use with AWS Lambda in serverless environments. LLRT is written in Rust and uses QuickJS as its JavaScript engine. LLRT's defining trait is that it deliberately omits the JIT compiler that current JavaScript runtimes ship for performance, implementing a simpler, lighter runtime that is tuned to start up fast. As a result, (compared with Node.js, Deno, and
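
    As an illustration only (not code from the article): LLRT runs ordinary Lambda-style handlers written as ES modules, so a handler written for Node.js can often run unchanged. A minimal sketch, with a made-up response body:

      // Minimal Lambda-style handler of the kind LLRT is built to run.
      // The event shape and response format follow the usual Lambda conventions;
      // nothing here is specific to LLRT itself.
      export const handler = async (event: unknown) => {
        const startedAt = Date.now();
        // ... application logic would go here ...
        return {
          statusCode: 200,
          body: JSON.stringify({ message: "hello", tookMs: Date.now() - startedAt }),
        };
      };

    The article's point is that dropping the JIT buys fast start-up; the handler code itself needs nothing LLRT-specific.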

  • Introducing CloudFront Functions – Run Your Code at the Edge with Low Latency at Any Scale | Amazon Web Services

    AWS News Blog: Introducing CloudFront Functions – Run Your Code at the Edge with Low Latency at Any Scale. With Amazon CloudFront, you can securely deliver data, videos, applications, and APIs to your customers globally with low latency and high transfer speeds. To offer a customized experience and the lowest possible latency, many modern applications execute some form of logic at the edge. The use
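
    For orientation, a generic sketch (not code from the post): a CloudFront Function is a small JavaScript handler that receives a viewer event and returns the request or a response. The header name below is made up, and the event type annotation is only a rough approximation of the real event shape.

      // A viewer-request handler that adds a header before the request continues.
      // CloudFront invokes a function named `handler` with an event carrying the request.
      function handler(event: { request: { headers: Record<string, { value: string }> } }) {
        const request = event.request;
        // Hypothetical header name, for illustration only.
        request.headers["x-example-edge"] = { value: "1" };
        return request;
      }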

  • GitHub - awslabs/llrt: LLRT (Low Latency Runtime) is an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications.

  • GitHub - ExistentialAudio/BlackHole: BlackHole is a modern macOS virtual audio driver that allows applications to pass audio to other applications with zero additional latency.

    BlackHole is a modern macOS virtual audio driver that allows applications to pass audio to other applications with zero additional latency.

  • Kubernetes made my latency 10x higher

    Update: it looks like this post has gotten way more attention than I anticipated. I’ve seen / received feedback that the title is misleading and some people get disappointed. I see why, so at the risk of spoiling the surprise, let me clarify what this is about before starting. As we migrate teams over to Kubernetes, I’m observing that every time someone has an issue, like “latency went up after migrati

  • Introducing Amazon CloudFront KeyValueStore: A low-latency datastore for CloudFront Functions | Amazon Web Services

    AWS News Blog: Introducing Amazon CloudFront KeyValueStore: A low-latency datastore for CloudFront Functions. Amazon CloudFront allows you to securely deliver static and dynamic content with low latency and high transfer speeds. With CloudFront Functions, you can perform latency-sensitive customizations for millions of requests per second. For example, you can use CloudFront Functions to modify head
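
    A rough sketch of how a function might read from the store, based on the cloudfront module as I recall it from the announcement; the exact API may differ, and the store ID is a placeholder:

      import cf from "cloudfront";

      // Placeholder ID; the real value is the KeyValueStore associated with the function.
      const kvsHandle = cf.kvs("<KVS_ID>");

      async function handler(event: { request: { uri: string } }) {
        const request = event.request;
        try {
          // Look up a rewrite target for the requested path.
          request.uri = await kvsHandle.get(request.uri);
        } catch {
          // Key not found: leave the URI unchanged.
        }
        return request;
      }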

  • GitHub - microsoft/garnet: Garnet is a remote cache-store from Microsoft Research that offers strong performance (throughput and latency), scalability, storage, recovery, cluster sharding, key migration, and replication features. Garnet can work with exis

    Garnet is a new remote cache-store from Microsoft Research that offers several unique benefits: Garnet adopts the popular RESP wire protocol as a starting point, which makes it possible to use Garnet from unmodified Redis clients available in most programming languages of today, such as StackExchange.Redis in C#. Garnet offers much better throughput and scalability with many client connections an
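
    Because Garnet speaks RESP, a stock Redis client can talk to it unchanged. The README's own example uses StackExchange.Redis in C#; the sketch below makes the same point with the node-redis package and a hypothetical local Garnet endpoint:

      import { createClient } from "redis";

      async function main() {
        // Garnet listens on a RESP endpoint, so an unmodified Redis client connects as-is.
        const client = createClient({ url: "redis://localhost:6379" });
        await client.connect();

        await client.set("greeting", "hello from garnet");
        console.log(await client.get("greeting"));

        await client.quit();
      }

      main().catch(console.error);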

  • Home | CloudPing - AWS Latency Monitoring

    Africa (Cape Town) af-south-1, Asia Pacific (Hong Kong) ap-east-1, Asia Pacific (Tokyo) ap-northeast-1, Asia Pacific (Seoul) ap-northeast-2, Asia Pacific (Osaka) ap-northeast-3, Asia Pacific (Mumbai) ap-south-1, Asia Pacific (Singapore) ap-southeast-1, Asia Pacific (Sydney) ap-southeast-2, Canada (Central) ca-central-1, EU (Frankfurt) eu-central-1, EU (Stockholm) eu-north-1, EU (Milan) eu-south-1, EU (Ireland

  • GitHub - paypal/junodb: JunoDB is PayPal's home-grown secure, consistent and highly available key-value store providing low, single digit millisecond, latency at any scale.

  • GitHub - apenwarr/blip: A tool for seeing your Internet latency. Try it at http://gfblip.appspot.com/

    Go to http://gfblip.appspot.com/ It should work on any PC, laptop, tablet, phone, or iPod with javascript and HTML canvas support (which means almost everything nowadays). X axis is time. Y axis is milliseconds of latency. Green blips are your ping time to gstatic.com (a very fast site that should be close to you wherever you are). Blue blips are your ping time to apenwarr.ca ("a site on the Inter

  • Reducing UDP latency

    Hi! I’m one of the Embox RTOS developers, and in this article I’ll tell you about one of the typical problems in the world of embedded systems and how we were solving it. Stating the problem: Control and responsibility is a key point for a wide range of embedded systems. On the one hand, sensors and detectors must notify some other devices that some event occurred, on the other hand, other systems shoul

  • Making Aurora Write Latency 15x Higher (or More!) by Choosing a Bad Primary Key

    Primary Key design is an important thing for InnoDB performance, and choosing a poor PK definition will have an impact on performance and also write propagation in databases. When this comes to Aurora, this impact is even worse than you may notice. In short, we consider a poor definition of a Primary Key in InnoDB as “anything but quasi sequential values”, which may cause very random access to dat

  • OpenCensusとhttptrace.ClientTraceを使ってHTTPリクエストのlatencyを可視化する - oinume journal

    Introduction: Hello everyone. This is the day-19 article of the Go5 Advent Calendar 2019. It is about visualizing the internal latency of HTTP requests using OpenCensus and httptrace.ClientTrace. By "internal latency" I mean things such as how long name resolution took within an HTTP request, or how long establishing the connection took. All of the code shown in the article is available in a GitHub repository. What I wanted to do: a certain application had a process that sent a large number of HTTP requests using an HTTP client. That server ran in a US region and sent the requests to a server in Japan, and the latency of this process really bothered me. So, for the HTTP request

  • GitHub - kffl/speedbump: TCP proxy for simulating variable, yet predictable network latency

  • Low latency tuning guide

    This guide describes how to tune your AMD64/x86_64 hardware and Linux system for running real-time or low latency workloads. Example workloads where this type of tuning would be appropriate: line rate packet capture, line rate deep packet inspection (DPI), applications using kernel-bypass networking, and accurate benchmarking of CPU bound programs. The term latency in this context refers to the time betwe

  • Latency Numbers Every Programmer Should Know

    Latency Numbers Every Programmer Should Know. Visualisation by samwho, based on the work of Colin Scott.

  • New – ENA Express: Improved Network Latency and Per-Flow Performance on EC2 | Amazon Web Services

    AWS News Blog: New – ENA Express: Improved Network Latency and Per-Flow Performance on EC2. We know that you can always make great use of all available network bandwidth and network performance, and have done our best to supply it to you. Over the years, network bandwidth has grown from the 250 Mbps on the original m1 instance to 200 Gbps on the newest m6in instances. In addition to raw bandwidth, w

  • Improving Latency with @defer and @stream Directives | GraphQL

    Rob Richard and Liliana Matos are front-end engineers at 1stDibs.com. They have been working with the GraphQL Working Group as champions of the @defer and @stream directives. The @defer and @stream directives have been a much anticipated set of features ever since Lee Byron first talked about it at GraphQL Europe 2016. For most of 2020, we have been working with the GraphQL Working Group to standa
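
    For context, a generic example rather than anything from the post: @defer lets the server send cheap fields immediately and stream an expensive fragment later. The schema, field, and fragment names below are invented:

      // A query where the cheap fields arrive in the first payload and the deferred
      // fragment is delivered in a follow-up payload once the server has it.
      const query = `
        query ProductPage($id: ID!) {
          product(id: $id) {
            id
            name
            ...Reviews @defer
          }
        }
        fragment Reviews on Product {
          reviews {
            body
          }
        }
      `;

      // The server responds with an initial result plus incremental patches,
      // which the client merges as they arrive.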

  • Cloud Run adds min instances feature for latency-sensitive apps | Google Cloud Blog

    Cloud Run min instances: Minimize your serverless cold starts. One of the great things about serverless is its pay-for-what-you-use operating model that lets you scale a service down to 0. But for a certain class of applications, the not-so-great thing about serverless is that it scales down to 0, resulting in latency to process the first request when your application wakes back up again. This so-c

  • Code-splitting and minimal edge latency: the perfect match

    Code-splitting and minimal edge latency: the perfect match. Fastly Fiddle, our code playground tool, is a React single-page app that uses the excellent Monaco IDE component that powers VS Code. Problem is, Monaco is huge. And most uses of Fiddle are read only. Do we really need to load a whole IDE to display some non-editable code? No! Is lazy loading code that's cached at the edge really fast? Yes!
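
    The technique being described is ordinary client-side code-splitting; a minimal sketch with React.lazy (component and module names are hypothetical, not Fastly's code):

      import { lazy, Suspense } from "react";

      // The heavyweight editor bundle is only fetched when it is actually rendered;
      // read-only views never pay for it.
      const MonacoEditor = lazy(() => import("./MonacoEditor"));

      export function Fiddle({ readOnly, code }: { readOnly: boolean; code: string }) {
        if (readOnly) {
          return <pre>{code}</pre>;
        }
        return (
          <Suspense fallback={<p>Loading editor...</p>}>
            <MonacoEditor value={code} />
          </Suspense>
        );
      }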

  • Latency in Asynchronous Python

    This week I was debugging a misbehaving Python program that makes significant use of Python’s asyncio. The program would eventually take very long periods of time to respond to network requests. My first suspicion was a CPU-heavy coroutine hogging the thread, preventing the socket coroutines from running, but an inspection with pdb showed this wasn’t the case. Instead, the program’s author had mad

  • 2021-04-06のJS: TypeScript 4.3 Beta、hls.js v1.0.0(Apple Low-Latency HLS)、Storybook 6.2

    JSer.info #534 - TypeScript 4.3 Beta has been released. Announcing TypeScript 4.3 Beta | TypeScript. Until now getters and setters were forced to have the same type, but a setter can now accept a wider type than its getter. An override keyword, which marks a method as overriding an inherited one, has been added and can be checked with --noImplicitOverride. Other changes include improvements to template string types and better support for the previously incomplete private class elements (fields/methods/accessors). Support for private class elements also involves runtime changes
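
    The two language changes mentioned above, sketched directly in TypeScript 4.3 syntax (class and member names are arbitrary):

      // TS 4.3: a setter may now accept a wider type than its getter returns.
      class Thing {
        #size = 0;
        get size(): number {
          return this.#size;
        }
        set size(value: number | string) {
          this.#size = typeof value === "string" ? parseInt(value, 10) : value;
        }
      }

      class Base {
        greet(): void {
          console.log("hello");
        }
      }

      class Derived extends Base {
        // TS 4.3: `override` documents an intentional override; with --noImplicitOverride,
        // omitting it on an overriding method becomes an error.
        override greet(): void {
          console.log("hi");
        }
      }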

  • Using GHC low-latency garbage collection in production

    This is a guest post by Domen Kožar. In this post I’ll dive into how low-latency garbage collection (GC) has improved developer experience for Cachix users. The need for low latency: Cachix serves the binary cache protocol for the Nix package manager. Before Nix builds a package, it will ask the binary cache if it contains the binary for a given package it wants to build. For a typical invocation o

  • ffmpeg で low latency DASH server 作ってみた - Qiita

    What is this? I have recently been poking around at low-latency live streaming with CMAF (Common Media Application Format), but I could not find an environment that was easy to try on an open-source basis, so I built one myself; that is what this post is about. The overall setup looks roughly like the following. First, let's get it running. Installation: installing ffmpeg. I will leave the explanation for later and start with how to run it. First, install ffmpeg; (probably) v4.3.1 or later is required. Ubuntu 20.04

  • Amazon Elastic File System Update – Sub-Millisecond Read Latency | Amazon Web Services

    AWS News Blog: Amazon Elastic File System Update – Sub-Millisecond Read Latency. Amazon Elastic File System (Amazon EFS) was announced in early 2015 and became generally available in 2016. We launched EFS in order to make it easier for you to build applications that need shared access to file data. EFS is (and always has been) simple and serverless: you simply create a file system, attach it to any

  • AirPods Pro Bluetooth Latency - Stephen Coyle

    A few years ago I wrote an article discussing the issues surrounding Bluetooth audio latency, and I feel like now is a good time to re-evaluate things, after a couple of generations of AirPods, and the new AirPods Pro. You can read the original article for more context, but the gist is that for a whole host of use cases, the delay between when a sound is triggered and when you hear it over a Bluet

  • Latency numbers every frontend developer should know – Vercel

    Web page load times and responsiveness to user action in web apps is a primary driver of user satisfaction–and both are often dominated by network latency. Latency itself is a function of the user's connection to the internet (Wifi, LTE, 5G), how far away the server is that the user is connecting to, and the quality of the network in between. While the latency numbers may seem low by themselves, t

  • Introducing Falcon: a reliable low-latency hardware transport | Google Cloud Blog

    Google opens Falcon, a reliable low-latency hardware transport, to the ecosystem. At Google, we have a long history of solving problems at scale using Ethernet, and rethinking the transport layer to satisfy demanding workloads that require high burst bandwidth, high message rates, and low latency. Workloads such as storage have needed some of these attributes for a long time, however, with newer us

  • Terminal Latency

    Motivation: I’ve been a long-time user of Xterm. I tried to switch to other terminal emulators several times because of Xterm’s broken Unicode support, especially regarding glyphs/emojis and multi-font substitution. These glyphs are part of many modern CLI tools and are often printed as blank squares in Xterm. More recently, I attempted to switch again, but every time I try, I’m discouraged by the a

  • Azure network round-trip latency statistics

    Azure continuously monitors the latency (speed) of core areas of its network using internal monitoring tools and measurements. How are the measurements collected? The latency measurements are collected from Azure cloud regions worldwide, and continuously measured in 1-minute intervals by network probes. The monthly latency statistics are derived from averaging the collected samples for the month.

  • How Discord Supercharges Network Disks for Extreme Low Latency

    It's no secret that Discord has become your place to talk; the 4 billion messages sent through the platform by millions of people per day have us convinced. But text chat only accounts for a chunk of the features that Discord supports. Server roles, custom emojis, video calls, and more all contribute to the hundreds of terabytes of data we serve to our users.† To provide this enormous amount of da

  • New for Amazon SQS – Update the AWS SDK to reduce latency | Amazon Web Services

    AWS News Blog: New for Amazon SQS – Update the AWS SDK to reduce latency. With Amazon SQS, you can send and receive messages between software components at any scale. It was one of the first AWS services I used and as a Solutions Architect, I helped many customers take advantage of asynchronous communications using message queues. In fact, Amazon SQS has been generally available since July 2006 and,

  • Introduction To Low Latency Programming: Minimize Branching And Jumping

    This post originally appears as a chapter in my new book: ‘Introduction To Low Latency Programming’, a short and approachable entry into the subject. Available now for purchase on Amazon. This chapter will discuss how branching and jumping in our code affects our runtime performance and how we can avoid them in our effort to reduce the latency of our programs. Branching refers to process execution

  • Investigating the impact of HTTP3 on network latency for search

    Investigating the impact of HTTP3 on network latency for search. Dropbox is well known for storing users’ files—but it’s equally important we can retrieve content quickly when our users need it most. For the Retrieval Experiences team, that means building a search experience that is as fast, simple, and powerful as possible. But when we conducted a research study in July 2022, one of the most commo

  • AWS Cost Anomaly Detection reduces anomaly detection latency by up to 30%

    Starting today, AWS Cost Anomaly Detection will detect cost anomalies up to 30% faster. Customers can now identify and respond to spending changes more quickly. Cost Anomaly Detection leverages advanced machine learning to identify unusual changes in spend, enabling customers to quickly take action to avoid unexpected costs. With this new capability, AWS Cost Anomaly Detection analyzes cost and us

  • 表示遅延測定ツール「PC Gaming Latency Tester」を特注した話 - ゲーミングモニタの選び方[番外編] : 自作とゲームと趣味の日々

    To measure display lag, which is important when reviewing high-refresh-rate gaming monitors, simply and accurately, I had the display-lag measurement tool "PC Gaming Latency Tester" custom built as a one-off. I will be using it in future review articles on this site covering gaming monitors, gaming laptops with high-refresh-rate displays, video capture devices, and so on, so this post introduces it as a reference. About display lag: first of all, as for what "display lag" actually is, a mouse, keyboard, or game

  • Stackdriver MonitoringのTotal Latencyメトリクスがどう集計されているのか解明してみる - Qiita

    TL;DR: the https/total_latencies metric is a DISTRIBUTION type, so the source data is already a histogram; ALIGN_SUM on a histogram appears to merge histograms; and aggregating https/total_latencies with ALIGN_SUM + REDUCE_PERCENTILE_99 looks like the way to go. How this started: Stackdriver Monitoring's Dashboards come with ready-made dashboards for Google Cloud Load Balancers and the like, where you can see response time (Total Latency), status codes (Response by Response Code Class), and so on. The predefined Total Latency setting looks like this, and I had been building an alert policy by copying (well, referring to) it (at first, in the Web Conso
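
    A rough sketch of that aggregation expressed as a query through the Cloud Monitoring Node.js client; this is my illustration, the metric type is the load-balancer latency distribution named above, and the project ID, time window, and exact request field names are assumptions that may need adjusting:

      import { MetricServiceClient } from "@google-cloud/monitoring";

      async function p99TotalLatency(projectId: string) {
        const client = new MetricServiceClient();
        const now = Math.floor(Date.now() / 1000);

        const [series] = await client.listTimeSeries({
          name: client.projectPath(projectId),
          filter: 'metric.type = "loadbalancing.googleapis.com/https/total_latencies"',
          interval: {
            startTime: { seconds: now - 3600 },
            endTime: { seconds: now },
          },
          aggregation: {
            // ALIGN_SUM merges the per-minute latency histograms...
            alignmentPeriod: { seconds: 60 },
            perSeriesAligner: "ALIGN_SUM",
            // ...and REDUCE_PERCENTILE_99 takes the 99th percentile of the merged histogram.
            crossSeriesReducer: "REDUCE_PERCENTILE_99",
          },
        });

        return series;
      }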

  • Low Latency Optimization: Understanding Huge Pages (Part 1)

    Latency is often a crucial factor in algorithmic trading. At HRT, a lot of effort goes into minimizing the latency of our trading stack. Low latency optimizations can be arcane, but fortunately there are a lot of very good guides and documents to get started. One important aspect that is not often discussed in depth is the role of huge pages and the translation lookaside buffer (TLB). In this seri

  • GitHub - cloudflare/lol-html: Low output latency streaming HTML parser/rewriter with CSS selector-based API

  • OBSとWowza Streaming Engineを使ってApple Low-Latency HLSライブストリーミング配信してみる | DevelopersIO

    Using a Wowza Streaming Engine built on AWS, I try live streaming with Apple Low-Latency HLS and see how much of the delay goes away. Setup: I set up a Wowza Streaming Engine that supports Apple Low-Latency HLS on EC2 (setup steps omitted); the OS is Amazon Linux 2. I installed it following "Install Wowza Streaming Engine". The license is the free-trial one, and a Wowza account is also required, so I applied for a Wowza Streaming Engine trial from the free-trial page. It can also be set up from AWS Marketplace. THEOPlayer: a player that can play Low-Latency HLS
