Access9001
I have a very large table (over 160 million rows), and I want to create some indexes to optimize it.
It has three columns: UserID (number/int), BadgeID (number/int), and WinDate (Date/Time).
Each column may contain duplicate values, but each row as a whole is unique. For instance:
1, 1, 5/24/2011
1, 4, 6/7/2011
1, 2, 4/4/2010
2, 2, 5/25/2011
2, 3, 6/2/2011
2, 4, 8/8/2011
I basically want to process this table statistically, so I'll be joining on UserID in some queries, on BadgeID in others, and grouping rows into date ranges in still others.
What kind of indexes should I create, and how (clustered, nonclustered, XML, spatial, etc.)?
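For concreteness, here is a small sketch of the kind of single-column indexes I have in mind, one per join/grouping column. It uses SQLite via Python purely as an illustration (my real table is in SQL Server, where these would be nonclustered indexes); the table and index names are made up for the example:

```python
import sqlite3

# In-memory database standing in for the real 160M-row table; SQLite is
# used here only to illustrate the idea -- the actual question is about
# SQL Server index types.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE BadgeWins (
        UserID  INTEGER NOT NULL,
        BadgeID INTEGER NOT NULL,
        WinDate TEXT    NOT NULL   -- ISO dates sort/group correctly as text
    )
""")
conn.executemany(
    "INSERT INTO BadgeWins VALUES (?, ?, ?)",
    [(1, 1, "2011-05-24"), (1, 4, "2011-06-07"), (1, 2, "2010-04-04"),
     (2, 2, "2011-05-25"), (2, 3, "2011-06-02"), (2, 4, "2011-08-08")],
)

# One single-column index per column that queries join or group on,
# mirroring nonclustered indexes in SQL Server.
conn.execute("CREATE INDEX IX_BadgeWins_UserID  ON BadgeWins (UserID)")
conn.execute("CREATE INDEX IX_BadgeWins_BadgeID ON BadgeWins (BadgeID)")
conn.execute("CREATE INDEX IX_BadgeWins_WinDate ON BadgeWins (WinDate)")

# Confirm the planner actually uses an index for a join-style lookup.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM BadgeWins WHERE UserID = 2"
).fetchall()
print(plan)  # plan text mentions IX_BadgeWins_UserID
```

Whether single-column indexes like these are enough, or whether a composite clustered index (say, on UserID then WinDate) would serve the grouping queries better, is exactly what I'm unsure about.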