I'm Puzzled: Could Use Some Advice ...Thanks

  • Thread starter: sunvalleyma (Guest)
  • Start date
I have built and tested a database using Access 2003 and have designed an ASP/VBScript dynamic website as the frontend. I had been impressed thus far, until I got to the magic number of 60,000 records stored in one table inside one database file. Even after I compacted the database, I'm puzzled because I'm getting slow performance when I search all 60,000 records at one time.

How could I design either the database structure or the ASP code so that searching through all 60,000 records takes less than 5 seconds to view my results via the website?

Right now, viewing all 60,000 records takes an estimated 25 seconds, which seems a bit slow to me. Is this good or not so good?

I thought Access could deliver decent search speeds on far more than just 60,000 records.

BASICALLY! I need to store lots of records and get fast website search results.

I wonder who out there is using Access, searching more than 60,000 records at one time, and getting their search results in less than 5 seconds... please respond, I would like to know what you think.

Thanks
 
Pat is right. In the interim, though, I would be very careful regarding the keys on which you are searching. Bad keys will kill you every time.

1. Bad cardinality = the key is not unique AND the list of possible matches grows to be a very big number. Look at reducing the number of records returned per key value.

2. Key buffer issues = if the key is very long (too many characters), it may not have been the best choice; some other structure might make the search faster. Shorter keys make faster searches.

How does this work? Keys must be read into a fixed-size buffer, searched one buffer-load at a time. Shorter keys = more keys per buffer = fewer disk reads per search.
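To make the "index the key you search on" point concrete, here is a small sketch. Access/Jet can't be driven from a script like this, so SQLite stands in for it, and the `parts` table, `part_no` column, and index name are all made up for the example; the principle is the same in both engines.

```python
import sqlite3

# Illustrative sketch using SQLite as a stand-in for Access/Jet.
# Table and column names are invented for the example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (part_no TEXT, description TEXT)")
con.executemany("INSERT INTO parts VALUES (?, ?)",
                (("P%05d" % i, "widget %d" % i) for i in range(60000)))

# Without an index, the search must scan all 60,000 rows.
plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM parts WHERE part_no = 'P01234'"
).fetchall()
print(plan_before)  # plan shows a full SCAN of parts

# A short, selective key makes a compact index: more keys fit per
# buffer-load, so fewer disk reads per search.
con.execute("CREATE INDEX idx_part_no ON parts (part_no)")
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM parts WHERE part_no = 'P01234'"
).fetchall()
print(plan_after)   # plan shows a SEARCH using idx_part_no
```

The before/after query plans show exactly the difference the post describes: a full table scan versus a narrow index lookup.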

3. Every now and then, give yourself a maintenance day.
3.a Remove and reassert each key in turn.
3.b Repair database
3.c Compact database

Why? It removes clutter, forces the system to reorder records and indexes, and gets rid of as much inefficiency as possible due to, in essence, "garbage collection" problems with MS Office memory scratchpad algorithms. (Known since the days of MS-DOS, still not fixed...)

4. If records can be "retired" (i.e. you want to remember them but not search them), consider building an archive table. Then archive during your maintenance period. Do the archiving BEFORE the reindex, repair, and compact steps.

How to archive:

4.a Include a flag, perhaps as simple as a Yes/No field named "ArchiveNow" or any other convenient name.
4.b Using an UPDATE query, set the ArchiveNow flag for any record you think you can safely remove.
4.c Using an APPEND query, select main-table records with ArchiveNow = Yes to be copied to the archive table.
4.d Using a DELETE query, select main-table records with ArchiveNow = Yes to be deleted from the main table.

Why does archiving help? Reduces the number of records to search = faster search.
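The flag/append/delete flow in steps 4.a-4.d can be sketched as plain SQL. SQLite again stands in for Access here (in Access an "append query" is an INSERT ... SELECT), and the `orders` table, its columns, and the cutoff date are all invented for the example.

```python
import sqlite3

# Sketch of the archive flow: flag, copy to archive, delete from main.
# SQLite stands in for Access; table/column names are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, placed TEXT, "
            "ArchiveNow INTEGER DEFAULT 0)")
con.execute("CREATE TABLE orders_archive AS SELECT * FROM orders WHERE 0")
con.executemany("INSERT INTO orders (id, placed) VALUES (?, ?)",
                [(1, "2001-01-15"), (2, "2005-06-01"), (3, "2001-03-02")])

# 4.b UPDATE query: flag records that are safe to remove.
con.execute("UPDATE orders SET ArchiveNow = 1 WHERE placed < '2002-01-01'")

# 4.c APPEND query (INSERT ... SELECT): copy flagged rows to the archive.
con.execute("INSERT INTO orders_archive SELECT * FROM orders "
            "WHERE ArchiveNow = 1")

# 4.d DELETE query: remove flagged rows from the main table.
con.execute("DELETE FROM orders WHERE ArchiveNow = 1")

print(con.execute("SELECT COUNT(*) FROM orders").fetchone()[0])          # 1
print(con.execute("SELECT COUNT(*) FROM orders_archive").fetchone()[0])  # 2
```

After the run, the two old orders live only in the archive table and the main table has shrunk, which is the whole point: fewer records to search.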
 
If you want a more capable database but have zero budget, both MS SQL Server and Oracle have free versions available.

I've used MS SQL Express for a table with ~600,000 rows, and as long as your search doesn't return too many of them, the web page pops up almost instantly.

If you're actually trying to view all the records at once, then the database you're using doesn't matter - the slowdown is the web page displaying all those rows!
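One way around that last problem is to page the results instead of pulling everything at once. A minimal sketch, again with SQLite standing in (Access and SQL Server would use SELECT TOP n; SQLite uses LIMIT/OFFSET), and with a made-up `records` table and a hypothetical `page` number that would normally come from the query string:

```python
import sqlite3

# Fetch one page of results instead of all 60,000 rows.
# SQLite stands in for the real database; names are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO records (name) VALUES (?)",
                (("row %d" % i,) for i in range(60000)))

PAGE_SIZE = 50
page = 3  # hypothetical page number, e.g. from the query string
rows = con.execute(
    "SELECT id, name FROM records ORDER BY id LIMIT ? OFFSET ?",
    (PAGE_SIZE, (page - 1) * PAGE_SIZE),
).fetchall()
print(len(rows))   # 50
print(rows[0][0])  # 101 (first id on page 3)
```

The database only ships 50 rows per request, and the browser only has to render 50 rows, so both ends stay fast no matter how big the table grows.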
 
