This is not a question, but a short tale on what's happened in the past few days for me.
We have a ton of data where I work (several terabytes) and I work with a lot of it. For a certain report, I link into several Access DBs of varying sizes (500KB to around 500MB) and pull the data I want into a few different tables. Once the queries were written to make the data for this report, they worked, but the entire process took around five and a half hours. We have other processes that take this long, so I didn't think too much of it.
One of the processes was doing an update on the main detail table, and that turned out to be the culprit. Without the update step, the whole thing took maybe 10 minutes, so something must be wrong with the update, right? Well, it's updating one field (out of a table of six fields), and the remaining five fields are inner joined to the source, so that shouldn't be it. It turned out to be the indices on the main detail table that were causing the massive overhead.
The way I got around it was to use some quick data definition queries to remove the indices (DROP INDEX), perform the appending, add the indices back (CREATE INDEX), and then do the updating. The whole thing went from 5 1/2 hours to 11 minutes.
If you have a poorly performing append or update query against a heavily indexed table, I'd recommend this method. Generically outlined, it looks like this:
1) Compact and repair.
2) Use SQL to delete all the content from the main detail table.
3) Use SQL to remove all indices from the main detail table and whatever other tables you will eventually link to.
4) Use SQL to do whatever appends you need to whatever tables need the data (in my case, the main detail table and a subordinate).
5) Use SQL to add your indices back in.
6) Perform any updates needed (in my case update the main detail table with data from the subordinate table).
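The outline above can be sketched in a few lines. Access DDL is similar but not identical to this, so here's a stand-in using Python's sqlite3 module; the table names, index names, and toy schema are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Toy stand-ins for the main detail table and its subordinate.
cur.execute("CREATE TABLE detail (k1 INTEGER, k2 INTEGER, amount REAL)")
cur.execute("CREATE TABLE sub (k1 INTEGER, k2 INTEGER, amount REAL)")
cur.execute("CREATE INDEX idx_detail ON detail (k1, k2)")

# Step 2: clear the main detail table.
cur.execute("DELETE FROM detail")

# Step 3: drop the indices before the big append.
cur.execute("DROP INDEX idx_detail")

# Step 4: do the appends with no index maintenance overhead.
cur.executemany("INSERT INTO detail (k1, k2, amount) VALUES (?, ?, 0.0)",
                [(i, i % 10) for i in range(1000)])
cur.executemany("INSERT INTO sub (k1, k2, amount) VALUES (?, ?, ?)",
                [(i, i % 10, float(i)) for i in range(1000)])

# Step 5: add the indices back on the now-populated tables.
cur.execute("CREATE INDEX idx_detail ON detail (k1, k2)")
cur.execute("CREATE INDEX idx_sub ON sub (k1, k2)")

# Step 6: update the detail table from the subordinate via the shared key.
cur.execute("""
    UPDATE detail
    SET amount = (SELECT s.amount FROM sub AS s
                  WHERE s.k1 = detail.k1 AND s.k2 = detail.k2)
""")
con.commit()
```

In Access itself the DDL queries would be plain `DROP INDEX IndexName ON TableName` and `CREATE INDEX IndexName ON TableName (Field1, Field2, ...)` statements run in the same order.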
I didn't realize how much overhead the indices were adding (the table is over two million records), and doing a simple append with all those indices in place was causing a timing nightmare. As long as your subordinate and main tables share the same indices (a five-field index in my case), the update flies.
I'm just sharing because this is one of those things that took me (with help from my boss) two days to figure out, and it's a huge time-saver. Apparently building an index on existing data is far faster (roughly 100x in my case) than maintaining it row by row as the data is appended. I didn't know that.
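That last point is easy to check in miniature. A small sqlite3 sketch (the actual ratio depends on the engine, row count, and index width, so no specific speedup is claimed here; Jet/Access behaves similarly in principle):

```python
import sqlite3
import time

rows = [(i, i % 100) for i in range(200_000)]

def load(index_first: bool) -> float:
    """Time a bulk insert, with the index either built before or after."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
    if index_first:
        con.execute("CREATE INDEX idx_t ON t (a, b)")
    start = time.perf_counter()
    con.executemany("INSERT INTO t (a, b) VALUES (?, ?)", rows)
    if not index_first:
        # Build the index once, over the finished data.
        con.execute("CREATE INDEX idx_t ON t (a, b)")
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

with_index = load(index_first=True)
index_after = load(index_first=False)
print(f"insert with index in place: {with_index:.3f}s")
print(f"insert, then build index:   {index_after:.3f}s")
```

Either way the end state is identical (same rows, same index); only the order of work changes.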