I currently have Access run a pass-through query that grabs about 60 columns and about a million rows of subscriber data from our data warehouse via an ODBC connection. I then basically use Access as a pivot cache for an Excel doc, to make everything prettier and easier to use for my customers. Unfortunately, they "need" all of this data to make sales calls and run other reports, so I can't really trim it down any further.

I've been getting complaints that the reporting is too slow (which it is). What I CAN do to make the pivot tables cache the data faster is break the million rows into smaller chunk tables by Associate Director, about 100k rows each. My thought was to have Access run the same query that builds the 1M-row table, but with a WHERE clause that plugs in each AD's name from a list (table), producing the 100k-row tables instead. The net data size is the same, but with the smaller pivot caches it should run much faster.

Everything is automated using the old method, but of course it takes hours, and I'm looking for help on how to accomplish my new idea using VBA or whatever.
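Here's a rough sketch of the kind of loop I had in mind. The names are all placeholders for my real objects: tblADList is the table holding the AD names in a field called ADName, qryWarehousePull stands in for the query that currently builds the 1M-row table, and AssociateDirector is the column I'd filter on.

```vba
' Sketch only: loop through the AD list and build one chunk table per AD.
' tblADList, ADName, qryWarehousePull, AssociateDirector, and the
' tbl_<ADName> naming scheme are placeholders for my actual objects.
Sub BuildChunkTables()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim adName As String
    Dim sql As String

    Set db = CurrentDb
    Set rs = db.OpenRecordset("SELECT ADName FROM tblADList", dbOpenSnapshot)

    Do While Not rs.EOF
        adName = rs!ADName

        ' Drop last run's chunk table if it exists; ignore the error if not.
        On Error Resume Next
        db.Execute "DROP TABLE [tbl_" & adName & "]", dbFailOnError
        On Error GoTo 0

        ' Make-table query: same pull as the 1M-row query, filtered to one AD.
        ' Doubling apostrophes guards against names like O'Brien.
        sql = "SELECT * INTO [tbl_" & adName & "] " & _
              "FROM qryWarehousePull " & _
              "WHERE [AssociateDirector] = '" & Replace(adName, "'", "''") & "'"
        db.Execute sql, dbFailOnError

        rs.MoveNext
    Loop

    rs.Close
    Set rs = Nothing
    Set db = Nothing
End Sub
```

I realize that filtering a pass-through query locally like this may still pull the full result set across the ODBC link before Access filters it, so maybe the WHERE clause belongs on the server side instead. That's part of what I'm unsure about.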
What's the better way to do this?