SQLite3 query optimization: join vs. subselect

I'm trying to figure out the best way (it probably doesn't matter much in this case) to find the rows of one table based on the presence of a flag and a relation ID in the rows of another table.

Here is the schema:

  CREATE TABLE files (
      id INTEGER PRIMARY KEY,
      dirty INTEGER NOT NULL);

  CREATE TABLE resume_points (
      id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
      scan_file_id INTEGER NOT NULL );

I'm using sqlite3.

The files table will be large, typically 10K to 5M rows.
resume_points will have fewer than 10K rows, with only 1-2 distinct scan_file_id values.

So my first thought was:

  select distinct files.* from resume_points inner join files
  on resume_points.scan_file_id=files.id where files.dirty = 1;

A coworker suggested writing the join the other way around:

  select distinct files.* from files inner join resume_points
  on files.id=resume_points.scan_file_id where files.dirty = 1;

Then I thought that, since we know the number of distinct scan_file_id values will be small, perhaps a subselect would be optimal (in this rare case):

  select * from files where id in (select distinct scan_file_id from resume_points);

The EXPLAIN output for these three queries has 42, 42, and 48 rows, respectively.
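As an aside, counting raw EXPLAIN rows is a blunt instrument: EXPLAIN lists low-level VDBE opcodes rather than the plan itself. SQLite also supports EXPLAIN QUERY PLAN, which reports the high-level plan (which table is scanned, and which index, if any, is used). A minimal sketch of comparing two of the candidates in the sqlite3 shell (the output format varies across SQLite versions):

  -- High-level plan instead of the opcode listing:
  explain query plan
  select distinct files.* from resume_points inner join files
  on resume_points.scan_file_id=files.id where files.dirty = 1;

  explain query plan
  select * from files where id in (select distinct scan_file_id from resume_points);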

TL;DR: The best query and index turned out to be:
  create index uniqueFiles on resume_points (scan_file_id);
  select * from (select distinct scan_file_id from resume_points) d join files on d.scan_file_id = files.id and files.dirty = 1;

Since I normally work with SQL Server, at first I assumed the query optimizer would surely find the best execution plan for such a simple query, no matter which of these equivalent SQL statements you wrote. So I downloaded SQLite and started experimenting. To my surprise, performance varied enormously.

Here is the setup code:

  CREATE TABLE files (
      id INTEGER PRIMARY KEY AUTOINCREMENT,
      dirty INTEGER NOT NULL);

  CREATE TABLE resume_points (
      id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
      scan_file_id INTEGER NOT NULL );

  -- Each statement below doubles the files table; 23 doublings of the
  -- single seed row yield 2^23 = 8,388,608 rows. random() returns a
  -- signed 64-bit integer, so random() < 0 marks about half the rows dirty.
  insert into files (dirty) values (0);
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
  insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;

  -- SQLite evaluates the uncorrelated scalar subquery once per statement,
  -- so each insert assigns the same random scan_file_id to all 5000 rows;
  -- this matches the 2 distinct values observed in the conclusions below.
  insert into resume_points (scan_file_id) select (select abs(random() % 8000000)) from files limit 5000;

  insert into resume_points (scan_file_id) select (select abs(random() % 8000000)) from files limit 5000;
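A quick sanity check after loading never hurts; a sketch (the dirty count will vary from run to run, since it depends on random()):

  -- Expect 2^23 = 8388608 files, roughly half of them dirty, and
  -- 10000 resume_points rows with only 2 distinct scan_file_id values.
  select count(*), sum(dirty) from files;
  select count(*), count(distinct scan_file_id) from resume_points;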

I considered the following indexes:

  create index dirtyFiles on files (dirty, id);
  create index uniqueFiles on resume_points (scan_file_id);
  create index fileLookup on files (id);
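One thing the tests below don't cover, mentioned here only as a suggestion: SQLite's planner can use statistics gathered by ANALYZE (stored in sqlite_stat1) when choosing between indexes, so it may be worth running it after creating them:

  -- Gather table and index statistics for the query planner.
  analyze;
  -- Inspect what was collected.
  select * from sqlite_stat1;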

Here are the queries I tried, with their execution times on an i5 laptop. The database file is only about 200MB, since it contains no other data.

  select distinct files.* from resume_points inner join files on resume_points.scan_file_id=files.id where files.dirty = 1;
  4.3 - 4.5 ms with and without index

  select distinct files.* from files inner join resume_points on files.id=resume_points.scan_file_id where files.dirty = 1;
  4.4 - 4.7 ms with and without index

  select * from (select distinct scan_file_id from resume_points) d join files on d.scan_file_id = files.id and files.dirty = 1;
  2.0 - 2.5 ms with uniqueFiles
  2.6 - 2.9 ms without uniqueFiles

  select * from files where id in (select distinct scan_file_id from resume_points) and dirty = 1;
  2.1 - 2.5 ms with uniqueFiles
  2.6 - 3 ms without uniqueFiles

  SELECT f.* FROM resume_points rp INNER JOIN files f on rp.scan_file_id = f.id
  WHERE f.dirty = 1 GROUP BY f.id;
  4500 - 6190 ms with uniqueFiles
  8.8 - 9.5 ms without uniqueFiles
  14000 ms with uniqueFiles and fileLookup

  select * from files where exists (
  select * from resume_points where files.id = resume_points.scan_file_id) and dirty = 1;
  8400 ms with uniqueFiles
  7400 ms without uniqueFiles
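For anyone reproducing these numbers, the sqlite3 shell has a built-in timer, which is the simplest way to measure queries like these (a sketch; the exact report format depends on the build):

  .timer on
  select * from (select distinct scan_file_id from resume_points) d
  join files on d.scan_file_id = files.id and files.dirty = 1;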

It looks like SQLite's query optimizer isn't very advanced at all. The best queries first reduce resume_points to a small number of rows (two in the test case; the OP says it will be 1-2 in practice), and only then look up each file to check whether it is dirty.

The dirtyFiles index didn't make much of a difference for any of the queries. I suspect that is due to how the data is arranged in the test tables, and it might matter on the production tables; but the difference shouldn't be large either way, since there will only be a handful of lookups. uniqueFiles does make a difference, because it lets SQLite reduce the 10,000 resume_points rows to 2 without scanning most of them. fileLookup made some queries slightly faster, but not enough to change the ranking; notably, it made the GROUP BY variant extremely slow. In short, reducing the result set as early as possible makes the biggest difference.
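One way to see this early reduction, or its absence, is to compare the plans of the fastest and slowest variants; a sketch in the sqlite3 shell (the wording of the output depends on the SQLite version):

  -- Fastest variant: the DISTINCT subquery collapses resume_points first.
  explain query plan
  select * from (select distinct scan_file_id from resume_points) d
  join files on d.scan_file_id = files.id and files.dirty = 1;

  -- Slowest variant in the tests above (with uniqueFiles and fileLookup).
  explain query plan
  SELECT f.* FROM resume_points rp INNER JOIN files f on rp.scan_file_id = f.id
  WHERE f.dirty = 1 GROUP BY f.id;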
