This document provides an overview of how the query planner and optimizer for SQLite work.
Given a single SQL statement, there might be dozens, hundreds, or even thousands of ways to implement that statement, depending on the complexity of the statement itself and of the underlying database schema. The task of the query planner is to select an algorithm from among the many choices that provides the answer with a minimum of disk I/O and CPU overhead.
1.0 WHERE clause analysis
The WHERE clause on a query is broken up into "terms" where each term is separated from the others by an AND operator. If the WHERE clause is composed of constraints separated by the OR operator then the entire clause is considered to be a single "term" to which the OR-clause optimization is applied.
All terms of the WHERE clause are analyzed to see if they can be satisfied using indices. Terms that cannot be satisfied through the use of indices become tests that are evaluated against each row of the relevant input tables. No tests are done for terms that are completely satisfied by indices. Sometimes one or more terms will provide hints to indices but still must be evaluated against each row of the input tables.
The analysis of a term might cause new "virtual" terms to be added to the WHERE clause. Virtual terms can be used with indices to restrict a search. But virtual terms never generate code that is tested against input rows.
To be usable by an index a term must be of one of the following forms:
column = expression
column > expression
column >= expression
column < expression
column <= expression
expression = column
expression > column
expression >= column
expression < column
expression <= column
column IN (expression-list)
column IN (subquery)
column IS NULL
If an index is created using a statement like this:
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
Then the index might be used if the initial columns of the index (columns a, b, and so forth) appear in WHERE clause terms. The initial columns of the index must be used with the = or IN operators. The right-most column that is used can employ inequalities. For the right-most column of an index that is used, there can be up to two inequalities that must sandwich the allowed values of the column between two extremes.
It is not necessary for every column of an index to appear in a WHERE clause term in order for that index to be used. But there cannot be gaps in the columns of the index that are used. Thus for the example index above, if there is no WHERE clause term that constrains column c, then terms that constrain columns a and b can be used with the index but not terms that constrain columns d through z. Similarly, no index column will be used (for indexing purposes) that is to the right of a column that is constrained only by inequalities.
1.1 Index term usage examples
For the index above and WHERE clause like this:
... WHERE a=5 AND b IN (1,2,3) AND c IS NULL AND d='hello'
The first four columns a, b, c, and d of the index would be usable since those four columns form a prefix of the index and are all bound by equality constraints.
For the index above and WHERE clause like this:
... WHERE a=5 AND b IN (1,3) AND c>12 AND d='hello'
Only columns a, b, and c of the index would be usable. The d column would not be usable because it occurs to the right of c and c is constrained only by inequalities.
For the index above and WHERE clause like this:
... WHERE a=5 AND b IN (1,3) AND d='hello'
Only columns a and b of the index would be usable. The d column would not be usable because column c is not constrained and there can be no gaps in the set of columns that are usable by the index.
For the index above and WHERE clause like this:
... WHERE b IN (1,3) AND c NOT NULL AND d='hello'
The index is not usable at all because the left-most column of the index (column "a") is not constrained. Assuming there are no other indices, the query above would result in a full table scan.
For the index above and WHERE clause like this:
... WHERE a=5 OR b IN (1,3) OR c NOT NULL OR d='hello'
The index is not usable because the WHERE clause terms are connected by OR instead of AND. This query would result in a full table scan. However, if three additional indices were added that contained columns b, c, and d as their left-most columns, then the OR-clause optimization might apply.
2.0 The BETWEEN optimization
If a term of the WHERE clause is of the following form:
expr1 BETWEEN expr2 AND expr3
Then two virtual terms are added as follows:
expr1 >= expr2 AND expr1 <= expr3
If both virtual terms end up being used as constraints on an index, then the original BETWEEN term is omitted and the corresponding test is not performed on input rows. Thus if the BETWEEN term ends up being used as an index constraint, no tests are ever performed on that term. On the other hand, the virtual terms themselves never cause tests to be performed on input rows. Thus if the BETWEEN term is not used as an index constraint and instead must be used to test input rows, the expr1 expression is only evaluated once.
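As a sketch, reusing the idx_ex1 index from section 1.0 (the specific values are made up for illustration), a BETWEEN term on the left-most index column is handled as if the two virtual inequality terms had been written out explicitly:

... WHERE a BETWEEN 10 AND 20
-- is treated, for indexing purposes, as if it were:
... WHERE a >= 10 AND a <= 20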
3.0 OR optimizations
WHERE clause constraints that are connected by OR instead of AND are handled in one of two ways. If a term consists of multiple subterms containing a common column name and separated by OR, like this:
column = expr1 OR column = expr2 OR column = expr3 OR ...
Then that term is rewritten as follows:
column IN (expr1, expr2, expr3, expr4, ...)
The rewritten term then might go on to constrain an index using the normal rules for IN operators. Note that column must be the same column in every OR-connected subterm, although the column can occur on either the left or the right side of the = operator.
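For example (a hypothetical fragment; any column could be used), the following OR-connected subterms all name the same column and are therefore rewritten into a single IN term that can constrain an index:

... WHERE a=1 OR a=2 OR 3=a
-- is rewritten as:
... WHERE a IN (1,2,3)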
If and only if the previously described conversion of OR to an IN operator does not work, the second OR-clause optimization is attempted. Suppose the OR clause consists of multiple subterms as follows:
expr1 OR expr2 OR expr3
Individual subterms might be a single comparison expression like a=5 or x>y, or they can be LIKE or BETWEEN expressions, or a subterm can be a parenthesized list of AND-connected sub-subterms. Each subterm is analyzed as if it were itself the entire WHERE clause in order to see if the subterm is indexable by itself. If every subterm of an OR clause is separately indexable then the OR clause might be coded such that a separate index is used to evaluate each term of the OR clause. One way to think about how SQLite uses separate indices for each OR clause term is to imagine that the WHERE clause were rewritten as follows:
rowid IN (SELECT rowid FROM table WHERE expr1 UNION SELECT rowid FROM table WHERE expr2 UNION SELECT rowid FROM table WHERE expr3)
The rewritten expression above is conceptual; WHERE clauses containing OR are not really rewritten this way. The actual implementation of the OR clause uses a mechanism that is more efficient than subqueries and which works even for tables where the "rowid" column name has been overloaded for other uses and no longer refers to the real rowid. But the essence of the implementation is captured by the statement above: Separate indices are used to find candidate result rows from each OR clause term and the final result is the union of those rows.
Note that in most cases, SQLite will only use a single index for each table in the FROM clause of a query. The second OR-clause optimization described here is the exception to that rule. With an OR-clause, a different index might be used for each subterm in the OR-clause.
For any given query, the fact that the OR-clause optimization described here can be used does not guarantee that it will be used. SQLite uses a cost-based query planner that estimates the CPU and disk I/O costs of various competing query plans and chooses the plan that it thinks will be the fastest. If there are many OR terms in the WHERE clause or if some of the indices on individual OR-clause subterms are not very selective, then SQLite might decide that it is faster to use a different query algorithm, or even a full-table scan. Application developers can use the EXPLAIN QUERY PLAN prefix on a statement to get a high-level overview of the chosen query strategy.
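For example, assuming separate indices exist on columns a and b of a hypothetical table ex1, prefixing the query with EXPLAIN QUERY PLAN shows whether SQLite decided to use the OR-clause optimization, a single index, or a full table scan (the output is not reproduced here because its format varies between SQLite versions):

EXPLAIN QUERY PLAN
SELECT * FROM ex1 WHERE a=5 OR b=10;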
4.0 The LIKE optimization
Terms that are composed of the LIKE or GLOB operator can sometimes be used to constrain indices. There are many conditions on this use:
- The left-hand side of the LIKE or GLOB operator must be the name of an indexed column with TEXT affinity.
- The right-hand side of the LIKE or GLOB must be either a string literal or a parameter bound to a string literal that does not begin with a wildcard character.
- The ESCAPE clause cannot appear on the LIKE operator.
- The built-in functions used to implement LIKE and GLOB must not have been overloaded using the sqlite3_create_function() API.
- For the GLOB operator, the column must be indexed using the built-in BINARY collating sequence.
- For the LIKE operator, if case_sensitive_like mode is enabled then the column must be indexed using the built-in BINARY collating sequence, or if case_sensitive_like mode is disabled then the column must be indexed using the built-in NOCASE collating sequence.
The LIKE operator has two modes that can be set by a pragma. The default mode is for LIKE comparisons to be insensitive to differences of case for latin1 characters. Thus, by default, the following expression is true:
'a' LIKE 'A'
But if the case_sensitive_like pragma is enabled as follows:
PRAGMA case_sensitive_like=ON;
Then the LIKE operator pays attention to case and the example above would evaluate to false. Note that case insensitivity only applies to latin1 characters - basically the upper and lower case letters of English in the lower 127 byte codes of ASCII. International character sets are case sensitive in SQLite unless an application-defined collating sequence and like() SQL function are provided that take non-ASCII characters into account. But if an application-defined collating sequence and/or like() SQL function are provided, the LIKE optimization described here will never be taken.
The LIKE operator is case insensitive by default because this is what the SQL standard requires. You can change the default behavior at compile time by using the SQLITE_CASE_SENSITIVE_LIKE command-line option to the compiler.
The LIKE optimization might occur if the column named on the left of the operator is indexed using the built-in BINARY collating sequence and case_sensitive_like is turned on. Or the optimization might occur if the column is indexed using the built-in NOCASE collating sequence and the case_sensitive_like mode is off. These are the only two combinations under which LIKE operators will be optimized.
The GLOB operator is always case sensitive. The column on the left side of the GLOB operator must always use the built-in BINARY collating sequence or no attempt will be made to optimize that operator with indices.
The LIKE optimization will only be attempted if the right-hand side of the GLOB or LIKE operator is either a literal string or a parameter that has been bound to a string literal. The string literal must not begin with a wildcard; if the right-hand side begins with a wildcard character then this optimization is not attempted. If the right-hand side is a parameter that is bound to a string, then this optimization is only attempted if the prepared statement containing the expression was compiled with sqlite3_prepare_v2() or sqlite3_prepare16_v2(). The LIKE optimization is not attempted if the right-hand side is a parameter and the statement was prepared using sqlite3_prepare() or sqlite3_prepare16(). The LIKE optimization is not attempted if there is an ESCAPE phrase on the LIKE operator.
Suppose the initial sequence of non-wildcard characters on the right-hand side of the LIKE or GLOB operator is x. We are using a single character to denote this non-wildcard prefix but the reader should understand that the prefix can consist of more than one character. Let y be the smallest string that is the same length as x but which compares greater than x. For example, if x is 'hello' then y would be 'hellp'. The LIKE and GLOB optimizations consist of adding two virtual terms like this:
column >= x AND column < y
Under most circumstances, the original LIKE or GLOB operator is still tested against each input row even if the virtual terms are used to constrain an index. This is because we do not know what additional constraints may be imposed by characters to the right of the x prefix. However, if there is only a single global wildcard to the right of x, then the original LIKE or GLOB test is disabled. In other words, if the pattern is like this:
column LIKE x%
column GLOB x*
then the original LIKE or GLOB tests are disabled when the virtual terms constrain an index because in that case we know that all of the rows selected by the index will pass the LIKE or GLOB test.
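As a concrete sketch (assuming a table with an indexed TEXT column named name whose index satisfies the collating-sequence conditions listed earlier), a prefix pattern is converted into a pair of range constraints that the index can satisfy:

... WHERE name LIKE 'abc%'
-- is constrained, for indexing purposes, as:
... WHERE name >= 'abc' AND name < 'abd'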
Note that when the right-hand side of a LIKE or GLOB operator is a parameter and the statement is prepared using sqlite3_prepare_v2() or sqlite3_prepare16_v2(), then the statement is automatically reparsed and recompiled on the first sqlite3_step() call of each run if the binding to the right-hand side parameter has changed since the previous run. This reparse and recompile is essentially the same action that occurs following a schema change. The recompile is necessary so that the query planner can examine the new value bound to the right-hand side of the LIKE or GLOB operator and determine whether or not to employ the optimization described above.
5.0 Joins
The ON and USING clauses of an inner join are converted into additional terms of the WHERE clause prior to the WHERE clause analysis described above in section 1.0. Thus with SQLite, there is no computational advantage to using the newer SQL92 join syntax over the older SQL89 comma-join syntax. They both end up accomplishing exactly the same thing on inner joins.
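For example, the following two inner-join queries (a sketch using the tab1 and tab2 tables from the next example) compile into exactly the same plan, because the ON clause of the first is folded into the WHERE clause before analysis:

SELECT * FROM tab1 JOIN tab2 ON tab1.x=tab2.y;
SELECT * FROM tab1, tab2 WHERE tab1.x=tab2.y;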
For a LEFT OUTER JOIN the situation is more complex. The following two queries are not equivalent:
SELECT * FROM tab1 LEFT JOIN tab2 ON tab1.x=tab2.y;
SELECT * FROM tab1 LEFT JOIN tab2 WHERE tab1.x=tab2.y;
For an inner join, the two queries above would be identical. But special processing applies to the ON and USING clauses of an OUTER join: specifically, the constraints in an ON or USING clause do not apply if the right table of the join is on a null row, but the constraints do apply in the WHERE clause. The net effect is that putting the ON or USING clause expressions for a LEFT JOIN in the WHERE clause effectively converts the query to an ordinary INNER JOIN - albeit an inner join that runs more slowly.
5.1 Order of tables in a join
The current implementation of SQLite uses only loop joins. That is to say, joins are implemented as nested loops.
The default order of the nested loops in a join is for the left-most table in the FROM clause to form the outer loop and the right-most table to form the inner loop. However, SQLite will nest the loops in a different order if doing so will help it to select better indices.
Inner joins can be freely reordered. However, a left outer join is neither commutative nor associative and hence will not be reordered. Inner joins to the left and right of the outer join might be reordered if the optimizer thinks that is advantageous but the outer joins are always evaluated in the order in which they occur.
When selecting the order of tables in a join, SQLite uses a greedy algorithm that runs in polynomial (O(N²)) time. Because of this, SQLite is able to efficiently plan queries with 50- or 60-way joins.
Join reordering is automatic and usually works well enough that programmers do not have to think about it, especially if ANALYZE has been used to gather statistics about the available indices. But occasionally some hints from the programmer are needed. Consider, for example, the following schema:
CREATE TABLE node(
  id INTEGER PRIMARY KEY,
  name TEXT
);
CREATE INDEX node_idx ON node(name);
CREATE TABLE edge(
  orig INTEGER REFERENCES node,
  dest INTEGER REFERENCES node,
  PRIMARY KEY(orig, dest)
);
CREATE INDEX edge_idx ON edge(dest, orig);
The schema above defines a directed graph with the ability to store a name at each node. Now consider a query against this schema:
SELECT *
  FROM edge AS e,
       node AS n1,
       node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
This query asks for all information about edges that go from nodes labeled "alice" to nodes labeled "bob". The query optimizer in SQLite has basically two choices on how to implement this query. (There are actually six different choices, but we will only consider two of them here.) The pseudocode below demonstrates these two choices.
Option 1:
foreach n1 where n1.name='alice' do:
  foreach n2 where n2.name='bob' do:
    foreach e where e.orig=n1.id and e.dest=n2.id
      return n1.*, n2.*, e.*
    end
  end
end
Option 2:
foreach n1 where n1.name='alice' do:
  foreach e where e.orig=n1.id do:
    foreach n2 where n2.id=e.dest and n2.name='bob' do:
      return n1.*, n2.*, e.*
    end
  end
end
The same indices are used to speed up every loop in both implementation options. The only difference in these two query plans is the order in which the loops are nested.
So which query plan is better? It turns out that the answer depends on what kind of data is found in the node and edge tables.
Let the number of alice nodes be M and the number of bob nodes be N. Consider two scenarios. In the first scenario, M and N are both 2 but there are thousands of edges on each node. In this case, option 1 is preferred. With option 1, the inner loop checks for the existence of an edge between a pair of nodes and outputs the result if found. But because there are only 2 alice and bob nodes each, the inner loop only has to run 4 times and the query is very quick. Option 2 would take much longer here. The outer loop of option 2 only executes twice, but because there are a large number of edges leaving each alice node, the middle loop has to iterate many thousands of times. It will be much slower. So in the first scenario, we prefer to use option 1.
Now consider the case where M and N are both 3500. Alice nodes are abundant. But suppose each of these nodes is connected by only one or two edges. In this case, option 2 is preferred. With option 2, the outer loop still has to run 3500 times, but the middle loop only runs once or twice for each outer loop and the inner loop will only run once for each middle loop, if at all. So the total number of iterations of the inner loop is around 7000. Option 1, on the other hand, has to run both its outer loop and its middle loop 3500 times each, resulting in 12 million iterations of the middle loop. Thus in the second scenario, option 2 is nearly 2000 times faster than option 1.
So you can see that depending on how the data is structured in the table, either query plan 1 or query plan 2 might be better. Which plan does SQLite choose by default? As of version 3.6.18, without running ANALYZE, SQLite will choose option 2. But if the ANALYZE command is run in order to gather statistics, a different choice might be made if the statistics indicate that the alternative is likely to run faster.
5.2 Manual Control Of Query Plans
SQLite provides the ability for advanced programmers to exercise control over the query plan chosen by the optimizer. One method for doing this is to fudge the ANALYZE results in the sqlite_stat1 and sqlite_stat2 tables. That approach is not recommended except for the one scenario described in the following paragraph.
For a program that uses an SQLite database as its application file format, when a new database instance is first created the ANALYZE command is ineffective because the database contains no data from which to gather statistics. In that case, one could construct a large prototype database containing typical data during development and run the ANALYZE command on this prototype database to gather statistics, then save the prototype statistics as part of the application. After deployment, when the application goes to create a new database file, it can run the ANALYZE command in order to create the sqlite_stat1 and sqlite_stat2 tables, then copy the precomputed statistics obtained from the prototype database into these new statistics tables. In that way, statistics from large working data sets can be preloaded into newly created application files.
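A minimal sketch of that workflow is shown below. Only the sqlite_stat1 table is shown (a build with SQLITE_ENABLE_STAT2 would need sqlite_stat2 populated the same way), and the stat strings are invented placeholders standing in for values copied from a hypothetical prototype database.

-- In the newly created database, make sure the statistics tables exist:
ANALYZE;
-- Copy in the statistics previously gathered from the prototype database:
DELETE FROM sqlite_stat1;
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES ('edge', 'edge_idx', '500000 10 1');
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES ('node', 'node_idx', '200000 1');
-- Reopen the database connection so the preloaded statistics are read
-- by the query planner.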
If you really must take manual control of join loop nesting order, the preferred method is to use some peculiar (though valid) SQL syntax to specify the join. If you use the keyword CROSS in a join, then the two tables connected by that join will not be reordered. So in the following query, the optimizer is free to reorder the tables of the FROM clause any way it sees fit:
SELECT *
  FROM node AS n1,
       edge AS e,
       node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
But in the following logically equivalent formulation of the query, the substitution of "CROSS JOIN" for the "," means that the order of tables must be N1, E, N2.
SELECT *
  FROM node AS n1 CROSS JOIN edge AS e CROSS JOIN node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
Hence, in the second form, the query plan must be option 2. Note that you must use the keyword CROSS in order to disable the table reordering optimization; INNER JOIN, NATURAL JOIN, JOIN, and other similar combinations work just like a comma join in that the optimizer is free to reorder tables as it sees fit. (Table reordering is also disabled on an outer join, but that is because outer joins are not associative or commutative. Reordering tables in outer joins changes the result.)
6.0 Choosing between multiple indices
Each table in the FROM clause of a query can use at most one index (except when the OR-clause optimization comes into play) and SQLite strives to use at least one index on each table. Sometimes, two or more indices might be candidates for use on a single table. For example:
CREATE TABLE ex2(x,y,z);
CREATE INDEX ex2i1 ON ex2(x);
CREATE INDEX ex2i2 ON ex2(y);
SELECT z FROM ex2 WHERE x=5 AND y=6;
For the SELECT statement above, the optimizer can use the ex2i1 index to look up rows of ex2 that contain x=5 and then test each row against the y=6 term. Or it can use the ex2i2 index to look up rows of ex2 that contain y=6 and then test each of those rows against the x=5 term.
When faced with a choice of two or more indices, SQLite tries to estimate the total amount of work needed to perform the query using each option. It then selects the option that gives the least estimated work.
To help the optimizer get a more accurate estimate of the work involved in using various indices, the user may optionally run the ANALYZE command. The ANALYZE command scans all indices of the database where there might be a choice between two or more indices and gathers statistics on the selectiveness of those indices. The statistics gathered by this scan are stored in special database tables whose names all begin with "sqlite_stat". The content of these tables is not updated as the database changes, so after making significant changes it might be prudent to rerun ANALYZE. The results of an ANALYZE command are only available to database connections that are opened after the ANALYZE command completes.
The various sqlite_statN tables contain information on how selective the various indices are. For example, the sqlite_stat1 table might indicate that an equality constraint on column x reduces the search space to 10 rows on average, whereas an equality constraint on column y reduces the search space to 3 rows on average. In that case, SQLite would prefer to use index ex2i2 since that index is more selective.
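A short sketch of gathering and inspecting those statistics (the rows returned depend entirely on the data, so no output is reproduced here):

ANALYZE;
-- Each row of sqlite_stat1 describes the selectivity of one index:
SELECT tbl, idx, stat FROM sqlite_stat1;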
Terms of the WHERE clause can be manually disqualified for use with indices by prepending a unary + operator to the column name. The unary + is a no-op and will not slow down the evaluation of the test specified by the term. But it will prevent the term from constraining an index. So, in the example above, if the query were rewritten as:
SELECT z FROM ex2 WHERE +x=5 AND y=6;
The + operator on the x column will prevent that term from constraining an index. This would force the use of the ex2i2 index.
Note that the unary + operator also removes type affinity from an expression, and in some cases this can cause subtle changes in the meaning of an expression. In the example above, if column x has TEXT affinity then the comparison "x=5" will be done as text. But the + operator removes the affinity. So the comparison "+x=5" will compare the text in column x with the numeric value 5 and will always be false.
6.1 Range Queries
Consider a slightly different scenario:
CREATE TABLE ex2(x,y,z);
CREATE INDEX ex2i1 ON ex2(x);
CREATE INDEX ex2i2 ON ex2(y);
SELECT z FROM ex2 WHERE x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
Further suppose that column x contains values spread out between 0 and 1,000,000 and column y contains values that span between 0 and 1,000. In that scenario, the range constraint on column x should reduce the search space by a factor of 10,000 whereas the range constraint on column y should reduce the search space by a factor of only 10. So the ex2i1 index should be preferred.
SQLite will make this determination, but only if it has been compiled with SQLITE_ENABLE_STAT2. The SQLITE_ENABLE_STAT2 option causes the ANALYZE command to collect a histogram of column content in the sqlite_stat2 table and to use this histogram to make a better guess at the best index to use for range constraints such as the above.
The histogram data is only useful if the right-hand side of the constraint is a simple compile-time constant or parameter and not an expression.
Another limitation of the histogram data is that it only applies to the left-most column on an index. Consider this scenario:
CREATE TABLE ex3(w,x,y,z);
CREATE INDEX ex3i1 ON ex3(w,x);
CREATE INDEX ex3i2 ON ex3(w,y);
SELECT z FROM ex3 WHERE w=5 AND x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
Here the inequalities are on columns x and y, which are not the left-most index columns. Hence, the histogram data, which is collected only for the left-most column of each index, is useless in helping to choose between the range constraints on columns x and y.
7.0 Avoidance of table lookups
When doing an indexed lookup of a row, the usual procedure is to do a binary search on the index to find the index entry, then extract the rowid from the index and use that rowid to do a binary search on the original table. Thus a typical indexed lookup involves two binary searches. If, however, all columns that were to be fetched from the table are already available in the index itself, SQLite will use the values contained in the index and will never look up the original table row. This saves one binary search for each row and can make many queries run twice as fast.
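For example, assuming the ex1 table and the idx_ex1 index from section 1.0, the following query never touches the ex1 table itself, because every column it mentions is stored in the index:

SELECT c FROM ex1 WHERE a=5 AND b=10;
-- a, b, and c are all columns of idx_ex1, so the values are read
-- directly from the index entries.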
8.0 ORDER BY optimizations
SQLite attempts to use an index to satisfy the ORDER BY clause of a query when possible. When faced with the choice of using an index to satisfy WHERE clause constraints or satisfying an ORDER BY clause, SQLite does the same work analysis described above and chooses the index that it believes will result in the fastest answer.
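As a sketch, again assuming the idx_ex1 index on ex1(a,b,c,...), the following query can walk the index and emit rows already in the requested order, avoiding a separate sorting pass:

SELECT a, b FROM ex1 WHERE a=5 ORDER BY b;
-- Rows with a=5 are stored within idx_ex1 in order of b,
-- so no separate sort step is needed.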
9.0 Subquery flattening
When a subquery occurs in the FROM clause of a SELECT, the simplest behavior is to evaluate the subquery into a transient table, then run the outer SELECT against the transient table. But such a plan can be suboptimal since the transient table will not have any indices and the outer query (which is likely a join) will be forced to do a full table scan on the transient table.
To overcome this problem, SQLite attempts to flatten subqueries in the FROM clause of a SELECT. This involves inserting the FROM clause of the subquery into the FROM clause of the outer query and rewriting expressions in the outer query that refer to the result set of the subquery. For example:
SELECT a FROM (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
Would be rewritten using query flattening as:
SELECT x+y AS a FROM t1 WHERE z<100 AND a>5
There is a long list of conditions that must all be met in order for query flattening to occur:
- The subquery and the outer query do not both use aggregates.
- The subquery is not an aggregate or the outer query is not a join.
- The subquery is not the right operand of a left outer join.
- The subquery is not DISTINCT or the outer query is not a join.
- The subquery is not DISTINCT or the outer query does not use aggregates.
- The subquery does not use aggregates or the outer query is not DISTINCT.
- The subquery has a FROM clause.
- The subquery does not use LIMIT or the outer query is not a join.
- The subquery does not use LIMIT or the outer query does not use aggregates.
- The subquery does not use aggregates or the outer query does not use LIMIT.
- The subquery and the outer query do not both have ORDER BY clauses.
- The subquery and outer query do not both use LIMIT.
- The subquery does not use OFFSET.
- The outer query is not part of a compound select or the subquery does not have both an ORDER BY and a LIMIT clause.
- The outer query is not an aggregate or the subquery does not contain ORDER BY.
- The sub-query is not a compound select, or it is a UNION ALL compound clause made up entirely of non-aggregate queries, and the parent query:
  - is not itself part of a compound select,
  - is not an aggregate or DISTINCT query, and
  - has no other tables or sub-selects in the FROM clause.
- If the sub-query is a compound select, then all terms of the ORDER BY clause of the parent must be simple references to columns of the sub-query.
- The subquery does not use LIMIT or the outer query does not have a WHERE clause.
- If the sub-query is a compound select, then it must not use an ORDER BY clause.
The casual reader is not expected to understand or remember any part of the list above. The point of this list is to demonstrate that the decision of whether or not to flatten a query is complex.
Query flattening is an important optimization when views are used as each use of a view is translated into a subquery.
10.0 The MIN/MAX optimization
Queries of the following forms will be optimized to run in logarithmic time assuming appropriate indices exist:
SELECT MIN(x) FROM table;
SELECT MAX(x) FROM table;
In order for these optimizations to occur, they must appear in exactly the form shown above - changing only the name of the table and column. It is not permissible to add a WHERE clause or do any arithmetic on the result. The result set must contain a single column. The column in the MIN or MAX function must be an indexed column.
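A minimal sketch, using a hypothetical table t with a single indexed column x:

CREATE TABLE t(x);
CREATE INDEX t_x ON t(x);
-- With the index in place, each of these seeks to one end of the
-- index instead of scanning the whole table:
SELECT MIN(x) FROM t;
SELECT MAX(x) FROM t;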
11.0 Automatic Indices
When no indices are available to aid the evaluation of a query, SQLite might create an automatic index that lasts only for the duration of a single SQL statement and use that index to help boost the query performance. Since the cost of constructing the automatic index is O(NlogN) (where N is the number of entries in the table) and the cost of doing a full table scan is only O(N), an automatic index will only be created if SQLite expects that the lookup will be run more than logN times during the course of the SQL statement. Consider an example:
CREATE TABLE t1(a,b);
CREATE TABLE t2(c,d);
-- Insert many rows into both t1 and t2
SELECT * FROM t1, t2 WHERE a=c;
In the query above, if both t1 and t2 have approximately N rows, then without any indices the query will require O(N*N) time. On the other hand, creating an index on table t2 requires O(NlogN) time and then using that index to evaluate the query requires an additional O(NlogN) time. In the absence of ANALYZE information, SQLite guesses that N is one million and hence it believes that constructing the automatic index will be the cheaper approach.
An automatic index might also be used for a subquery:
CREATE TABLE t1(a,b);
CREATE TABLE t2(c,d);
-- Insert many rows into both t1 and t2
SELECT a, (SELECT d FROM t2 WHERE c=b) FROM t1;
In this example, the t2 table is used in a subquery to translate values of the t1.b column. If each table contains N rows, SQLite expects that the subquery will run N times, and hence it will believe it is faster to construct an automatic, transient index on t2 first and then to use that index to satisfy the N instances of the subquery.
The automatic indexing capability can be disabled at run-time using the automatic_index pragma and can be omitted from the build at compile-time using the SQLITE_OMIT_AUTOMATIC_INDEX compile-time option.
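For example, an application that wants to rely only on its own explicitly declared indices can turn the feature off at run time:

PRAGMA automatic_index = OFF;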