Answer: I think there is no real architectural difference, only a difference in which layers are "exposed". Drill must implement a shuffle in some form to support joins, and Spark has `treeAggregate`, which could be called a "Dremel-style aggregation tree". The difference is that Spark "exposes" its execution engine with ...
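To make the "aggregation tree" idea concrete, here is a minimal sketch in plain Python (not Spark code) of what `RDD.treeAggregate` does conceptually: per-partition results are merged through layers of intermediate combiners rather than in one flat reduce at the driver. The function name `tree_aggregate` and its parameters are illustrative; only `seqOp`, `combOp`, and `depth` mirror Spark's actual API.

```python
from functools import reduce

def tree_aggregate(partitions, zero, seq_op, comb_op, depth=2):
    """Sketch of Dremel-style tree aggregation: combine per-partition
    partials in layers until a single result remains."""
    # Pass 1: aggregate within each partition (Spark's seqOp).
    partials = [reduce(seq_op, part, zero) for part in partitions]
    # Pass 2+: merge partials in groups (Spark's combOp), shrinking
    # the list level by level -- this is the aggregation "tree".
    scale = max(2, int(round(len(partials) ** (1.0 / depth))))
    while len(partials) > 1:
        partials = [
            reduce(comb_op, partials[i:i + scale])
            for i in range(0, len(partials), scale)
        ]
    return partials[0]

# Example: sum four partitions of numbers through a 2-level tree.
parts = [[1, 2], [3, 4], [5, 6], [7, 8]]
total = tree_aggregate(parts, 0, lambda a, b: a + b, lambda a, b: a + b)
# total == 36
```

In real Spark the intermediate layers run on executors, which is why a tree aggregate avoids overloading the driver when there are many partitions.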
> Apache Spark SQL and Drill -- DAG vs. tree execution, value vector vs. SchemaRDD format. What are the pros and cons?