> Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.

I wish people would follow this rule and just let stuff work. I recently encountered the most extreme version of this I've ever seen in my career: a design review where a guy proposed a R
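The rule's prescription is to measure first. In Python, for instance, proving where the time goes can be a few lines of profiling (`slow_sum` below is a hypothetical stand-in for the code under suspicion):

```python
import cProfile
import io
import pstats

# Hypothetical workload standing in for "the code you suspect is slow".
def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Report the functions that actually consumed the time, sorted by
# cumulative cost -- evidence of where the bottleneck is, not a guess.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

Only once a report like this points at a real hot spot does a speed hack earn its place.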
FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.6+. It is one of the fastest Python frameworks available, as measured by independent benchmarks.

It is based on standard Python type hints. Using them, you get automatic data validation, serialization, and documentation, including for deeply nested JSON documents. And you get editor completion and checks everywh
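The type-hint mechanism is easy to sketch with the standard library alone. This is illustrative only — FastAPI itself delegates validation to Pydantic, and `Item`/`validate` here are made-up names, not FastAPI APIs:

```python
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class Item:
    name: str
    price: float

def validate(payload: dict, model):
    """Coerce and validate a dict against a dataclass's type hints."""
    hints = get_type_hints(model)
    kwargs = {}
    for field, typ in hints.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        # Coerce using the annotated type, e.g. float("9.5") -> 9.5
        kwargs[field] = typ(payload[field])
    return model(**kwargs)

item = validate({"name": "widget", "price": "9.5"}, Item)
print(item)  # Item(name='widget', price=9.5)
```

Because the same annotations drive coercion, validation, and (in FastAPI's case) the generated OpenAPI docs, you write the types once and everything else follows.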
Size is such a tiny concern. I'm surprised people make such a big deal about it. When all of your images use the same base, it's only a one-time cost anyway.

And there are FAR more important concerns:

- Are the packages in your base system well maintained and updated with security fixes?
- Does your base system have longevity? Will it still be maintained a few years from now?
- Does it handle all o
The relevant change:

> This directory contains source code to an experimental "version 4" of SQLite that was being developed between 2012 and 2014.
> All development work on SQLite4 has ended. The experiment has concluded.
> Lessons learned from SQLite4 have been folded into SQLite3, which continues to be actively maintained and developed. This repository exists as an historical record. There are no
This "modern" Spanner feels very different from the one we saw in 2012 [1]. Some interesting takeaways:

* There is a native SQL interface in Spanner, rather than relying on a separate upper-layer SQL layer, a la F1 [2]
* Spanner is no longer on top of Bigtable! Instead, the storage engine seems to be a heavily modified Bigtable with a column-oriented file format
* Data is resharded frequently and c
ElephantDB seems to be a minimalist DB made for the very specific task of serving MapReduce results from Hadoop - it doesn't even support writes to the DB (which is fair enough, as persisting data once the MapReduce results get updated would then be quite painful).

The main thing is it gets rid of the painful step of having to set up another datastore, run MapReduce, take the result and store it in