Introducing the Confluent Operator: Apache Kafka on Kubernetes Made Simple. At Confluent, our mission is to put a Streaming Platform at the heart of every digital company in the world. This means making it easy to deploy and use Apache Kafka and Confluent Platform, the de facto Streaming Platform, across a variety of infrastructure environments. In the last few years, the rise of Kubernetes as the c
Copyright (C) 2018 Yahoo Japan Corporation. All Rights Reserved. Unauthorized citation and reproduction prohibited. Self-introduction: Nozomu Kurihara. Career: Apr 2012, joined Yahoo Japan Corporation as a new graduate; Oct 2012, development of an internal platform for user attribute data; Jul 2015, rebuild of the Yahoo! Auctions backend system; Oct 2016, development of an internal messaging platform using Pulsar; Jun 2017 onward, committer on Pulsar. Hobbies: Puyo Puyo Tetris, various board games. Apache Pulsar: developed at Yahoo! Inc. (now Oath)
Yuto Kawamura from LINE Corporation presented on their use of Apache Kafka clusters to provide multitenancy for different internal teams. They face challenges in ensuring isolation between client workloads and preventing abusive clients. Their solutions include request quotas to limit client resource usage, slow logs to identify slow requests, and changes to the broker code to pre-warm caches and
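Request quotas of the kind described can be set per client with Kafka's stock tooling; a hedged sketch (the ZooKeeper address and client id are placeholders, and the exact flags vary by Kafka version):

```shell
# Throttle one client (placeholder name "reporting-client") to ~1 MB/s produce
# and ~2 MB/s fetch. producer_byte_rate / consumer_byte_rate are the standard
# Kafka client quota settings.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
  --entity-type clients --entity-name reporting-client
```

Brokers enforce these quotas by delaying responses to clients that exceed the configured byte rate, which contains abusive workloads without failing them outright.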
Get emerging insights on innovative technology straight to your inbox. At Banzai Cloud we are building a cloud-agnostic, open-source, next-generation CloudFoundry/Heroku-like PaaS, Pipeline, while running several big data workloads natively on Kubernetes. Apache Kafka is one of the cloud native workloads we support out-of-the-box, alongside Apache Spark and Apache Zeppelin. If you’re interested in
Squeezing the firehose: getting the most from Kafka compression (2018-03-05). We at Cloudflare are long-time Kafka users; the first mentions of it date back to the beginning of 2014, when the most recent version was 0.8.0. We use Kafka as a log to power analytics (both HTTP and DNS), DDoS mitigation, logging and metrics. [Image: Firehose, CC BY 2.0, by RSLab] While the idea of a unifying abstraction of the log remain
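The compression gains explored in that post can be illustrated with stdlib tools alone; a minimal sketch (the payloads are invented, and Kafka compresses whole record batches rather than individual messages, which is exactly why batching matters so much for ratios):

```python
import gzip
import json

# Simulate a Kafka record batch: many similar JSON log lines compress far
# better together than one at a time, because the shared structure repeats.
records = [json.dumps({"host": f"edge{i % 4}", "status": 200, "path": "/index.html"})
           for i in range(1000)]

batch = "\n".join(records).encode()   # one batch, compressed as a unit
compressed_batch = len(gzip.compress(batch))
per_message = sum(len(gzip.compress(r.encode())) for r in records)  # each alone

print(len(batch), compressed_batch, per_message)
```

Running this shows the batch-compressed size is a small fraction of both the raw size and the sum of individually compressed messages, which is the core reason producer-side batching settings interact so strongly with compression codec choice.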
Should You Put Several Event Types in the Same Kafka Topic? If you adopt a streaming platform such as Apache Kafka, one of the most important questions to answer is: what topics are you going to use? In particular, if you have a bunch of different events that you want to publish to Kafka as messages, do you put them in the same topic, or do you split them across different topics? The most importan
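One factor in that topic-design question is ordering: Kafka guarantees order only within a partition, and the partition is derived from the message key, so events about the same entity stay ordered only if they share a key, and therefore a topic. A toy sketch of the hash-based partitioning idea (using Python's `zlib.crc32` as a stand-in for Kafka's actual murmur2 partitioner):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Stand-in for Kafka's default partitioner: hash the key, mod partition count.
    return zlib.crc32(key) % num_partitions

# All events for customer-42 (created, updated, ...) land on the same partition,
# so a single consumer sees them in order; other keys may hash elsewhere.
events = [(b"customer-42", "created"), (b"customer-7", "created"),
          (b"customer-42", "updated")]
placements = [(partition_for(key, 6), event) for key, event in events]
print(placements)
```

Splitting these events across topics would break that per-entity ordering guarantee, which is one of the strongest arguments for keeping related event types together.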
Webinar: Akka 24.05 release highlights. Tyler Jewell, CEO, Jonas Bonér, founder and CTO, and Michael Nash, CISO, delve into the value and power of these new features and enhancements. Q&A to follow. Lightbend aims to democratize distributed systems for developers. Tyler Jewell, CEO of Lightbend, talks about some of the key challenges developers are up against with distributed systems and how Lightbe
Apache Kafka: Producer, Broker and Consumer. 2017 was the first year I used Apache Kafka seriously for real work (production operation, not a PoC). Because, for internal reasons, we had often used messaging middleware similar to Kafka, its usability caused almost no confusion, and since Kafka is extremely stable as middleware, the cluster itself never once suffered anything that could be called a real failure. However, best practices for things like Kafka topic design are rarely documented even as case studies, and my teammates and I often agonized over them. In this story I look back on and share what we thought about, and what went wrong, in application design with Kafka. Note that tuning topics such as the number of partitions and the various buffer sizes are not covered this
At Uber, we are seeing an increasing demand for Kafka at-least-once delivery (acks=all). So far, we have been running a dedicated at-least-once Kafka cluster with special settings. With a very low workload, the dedicated at-least-once cluster has been working well for more than a year. When we tried to allow at-least-once producing on the regular Kafka clusters, producing performance was the main conc
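For context, at-least-once producing is enabled with settings along these lines (a hedged sketch: the keys are the standard Kafka producer and topic configs, but the values are illustrative, not Uber's):

```
# Producer side: wait for all in-sync replicas to acknowledge, retry transient errors.
acks=all
retries=3
max.in.flight.requests.per.connection=1   # avoid reordering on retry

# Topic/broker side: refuse acks=all writes unless at least 2 replicas are in sync.
min.insync.replicas=2
```

The performance concern follows directly from these settings: every produce request must now wait on replication across brokers before the producer can consider it done.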
Metrics Are Not Enough: Monitoring Apache Kafka and Streaming Applications
1) Apache Kafka is a distributed system with many moving parts to monitor, including brokers, topics, partitions, and the applications that use Kafka. It is critical to monitor Kafka performance to ensure high availability and catch problems early.
2) Key metrics to monitor include partition replication, broker resource usa
Spark Streaming has supported Kafka since its inception, but a lot has changed since then, on both the Spark and Kafka sides, to make this integration more fault-tolerant and reliable. Apache Kafka 0.10 (actually since 0.9) introduced the new Consumer API, built on top of a new group coordination protocol provided by Kafka itself. So a new Spark Streaming integration comes to the playground, wi
It has been seven years since we first set out to create the distributed streaming platform we know now as Apache Kafka®. Born initially as a highly scalable messaging system, Apache Kafka has evolved over the years into a full-fledged distributed streaming platform for publishing and subscribing, storing, and processing streaming data at scale and in real time. Since we first open-sourced Apache
When you add a consumer to a Kafka consumer group, a rebalance takes place that reassigns which partitions each consumer reads. Targeting Kafka 0.11.0.1, I dug into Kafka's implementation from the following angles and summarize the results here: when does a rebalance happen; why doesn't message delivery start immediately after a new consumer is added; and can duplicate message processing occur when a rebalance happens. When does a rebalance happen: inside the KafkaConsumer.poll() method. A Kafka consumer implementation roughly looks like the following (see the gist for the full code): Kafka Comsumer Sample · GitHub
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties());
consumer.subscribe(Collections.singletonList("my-topic")); // topic name is illustrative
ConsumerRecords<String, String> records = consumer.poll(100); // the rebalance runs inside poll()
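The reassignment a rebalance performs can be sketched as a pure-function toy; this mimics the spirit of Kafka's round-robin assignor, not its actual code:

```python
def assign(partitions: list, consumers: list) -> dict:
    # Toy round-robin assignor: deal partitions out to consumers in turn.
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

parts = list(range(6))
print(assign(parts, ["c1"]))        # one consumer owns every partition
print(assign(parts, ["c1", "c2"]))  # adding c2 moves partitions: the rebalance
```

Because ownership of moved partitions must be revoked from one consumer and granted to another, consumption pauses during the handoff, which is why delivery does not resume instantly when a consumer joins.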