As requested by Yavor Angelov from MSFT via Twitter, I'm writing down what seem to be my biggest pains with the QO (query optimizer) in SQL Server.
1)
Whenever you have a join between large tables, the QO will look at the indexes that exist on the join keys. If some indexes are missing, it seems that the default join type chosen is Nested Loops, which is OK for returning results fast but is really slow when memory is scarce on the server or the load is high.
It is also hard to execute in parallel.
Usually in this situation I have to look at the execution plan and steer the QO toward the proper plan by creating the missing indexes.
I think that when the cardinality of the joined tables is high, creating a temporary index would help a lot to change the QP from nested loops to a hash join, which can take advantage of multiple processors. The time to create the missing NC index is also very low.
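As a rough sketch of what I end up doing by hand (the dbo.Orders and dbo.OrderDetails tables and their columns are just placeholders, not a real schema), creating the missing nonclustered index and then comparing against a hash-join plan looks something like this:

-- Placeholder schema: two large tables joined on OrderID.
-- Create the nonclustered index the plan shows as missing on the join key.
CREATE NONCLUSTERED INDEX IX_OrderDetails_OrderID
    ON dbo.OrderDetails (OrderID);

-- For comparison only: force a hash join to see how it behaves
-- versus the nested loops plan the QO picked by default.
SELECT o.OrderID, SUM(d.Quantity) AS TotalQty
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d
    ON d.OrderID = o.OrderID
GROUP BY o.OrderID
OPTION (HASH JOIN);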
2)
A lot of times it seems that the QO is not looking at the statistics to apply the selection on the small tables first and limit the result set that has to be joined with the other tables to a small number of rows.
The QO should calculate the QP cost using different methods and then pick the QP with the smallest cost. This does not seem to work very well, especially when there is a mix of small and large tables to be joined.
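To make this concrete, here is a hypothetical example (Categories, Products, Sales and their columns are made up): a very selective predicate on a small table should be applied first so that only a handful of rows flow into the joins against the large tables. When the QO doesn't do that on its own, one workaround is to write the small table first and pin the join order:

-- Hypothetical schema: Categories is tiny, Products and Sales are large.
-- The filter on the small table should shrink the join input up front.
SELECT p.ProductID, SUM(s.Amount) AS Total
FROM dbo.Categories AS c
JOIN dbo.Products   AS p ON p.CategoryID = c.CategoryID
JOIN dbo.Sales      AS s ON s.ProductID  = p.ProductID
WHERE c.CategoryName = 'Books'   -- selective predicate on the small table
GROUP BY p.ProductID
OPTION (FORCE ORDER);            -- keep the filtered small table as the first join input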
To test these scenarios you can use a virtual server with minimal memory, and you'll see that the differences between QPs become very evident.
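One way to make the comparison measurable on such a box (reusing the placeholder tables from the first example) is to run the same join with each strategy forced and compare the reads, CPU and elapsed time reported:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Nested loops variant.
SELECT o.OrderID, SUM(d.Quantity) AS TotalQty
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID
GROUP BY o.OrderID
OPTION (LOOP JOIN);

-- Hash join variant.
SELECT o.OrderID, SUM(d.Quantity) AS TotalQty
FROM dbo.Orders AS o
JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID
GROUP BY o.OrderID
OPTION (HASH JOIN);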