It’s unfortunate that the post focuses mostly on the usage of Spring and RabbitMQ and that the slide deck doesn’t dive deeper into the architecture, data flows, and data stores, but the diagrams below should give you an idea of this truly polyglot persistence architecture. The slide deck presenting architecture principles and numbers about the platform is after the break.
A very long post by Richard McDougall explaining why virtualizing Hadoop may make sense and how VMware’s Project Serengeti can help. Answering the question in the title, McDougall enumerates 6 reasons:

1. Consolidation/sharing of a big-data platform
2. Rapid provisioning
3. Resource sharing
4. High availability
5. Security
6. Versioned Hadoop environments

He’s also addressing two of the most common questions about Hadoop.
BigCache: BigCache addresses this problem by persisting the cached data in memory within the same JVM process, but outside the JVM heap. This prevents the garbage collector from interacting with the cache’s memory zone, allowing the JVM heap size to be scaled based on processing needs only. While this solution is slightly slower than in-heap data access, it is faster than disk or network data transfers.
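To make the off-heap idea concrete, here is a minimal sketch (not BigCache’s actual API, just an illustration of the technique): values are copied into direct ByteBuffers allocated outside the JVM heap, so the garbage collector only sees a small on-heap index of keys and buffer references, never the cached bytes themselves.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical off-heap cache sketch. Values live in direct ByteBuffers
// (native memory outside the JVM heap), so the GC never scans the cached
// data; only the key -> buffer index is on-heap and visible to the collector.
public class OffHeapCacheSketch {
    private final Map<String, ByteBuffer> index = new HashMap<>();

    public void put(String key, byte[] value) {
        // allocateDirect reserves memory outside the JVM heap
        ByteBuffer buf = ByteBuffer.allocateDirect(value.length);
        buf.put(value);
        buf.flip(); // ready the buffer for reading
        index.put(key, buf);
    }

    public byte[] get(String key) {
        ByteBuffer buf = index.get(key);
        if (buf == null) return null;
        byte[] out = new byte[buf.remaining()];
        // duplicate() shares the bytes but keeps an independent position,
        // so concurrent reads don't interfere with each other
        buf.duplicate().get(out);
        return out;
    }

    public static void main(String[] args) {
        OffHeapCacheSketch cache = new OffHeapCacheSketch();
        cache.put("greeting", "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(cache.get("greeting"), StandardCharsets.UTF_8));
    }
}
```

The trade-off described above shows up here: each `get` copies bytes back onto the heap, which is slower than dereferencing an on-heap object but avoids long GC pauses over gigabytes of cached data.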