Breaking a Large Table Down

andymin
Hi,


I wonder if anyone could help with a problem that I just cannot get my head round. I have a database that I have rolled out to a group of staff, which collects data on the work they process throughout the day. Users must log in to the database using credentials assigned to them, so that they can only enter details on their own behalf. A table collects the following data:
  • Unique UserID (automatically inserted from login)
  • Date Work Entered
  • Amount of work processed for several categories
I have forms set up so that the user logs in and their UserID is stored as a global variable. Users are only able to enter data once for each date. The table collects data for all users and has a tendency to become very large. I would like to break it down into smaller, related tables, so that all the data is not stored in one table, as I believe that isn't the best way to do this.
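To make the idea concrete, here is a rough sketch of the kind of split I have in mind; the table and field names are invented for illustration, not what I actually use:

Code:
' Rough sketch only - table and field names are invented for illustration.
Public Sub SketchNormalisedTables()
    ' One row per user per day; the unique constraint enforces
    ' the one-entry-per-date rule at table level.
    CurrentDb.Execute "CREATE TABLE tblWorkEntry (" & _
        "EntryID COUNTER CONSTRAINT pkEntry PRIMARY KEY, " & _
        "UserID LONG NOT NULL, " & _
        "WorkDate DATETIME NOT NULL, " & _
        "CONSTRAINT uqUserDate UNIQUE (UserID, WorkDate))", dbFailOnError
    ' One row per category per entry, rather than one column per category.
    CurrentDb.Execute "CREATE TABLE tblWorkDetail (" & _
        "DetailID COUNTER CONSTRAINT pkDetail PRIMARY KEY, " & _
        "EntryID LONG NOT NULL, " & _
        "Category TEXT(50) NOT NULL, " & _
        "Amount DOUBLE, " & _
        "CONSTRAINT fkEntry FOREIGN KEY (EntryID) " & _
        "REFERENCES tblWorkEntry (EntryID))", dbFailOnError
End Sub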

I hope I've explained this well enough; I've attached a copy of the database in case anyone can help!

Thanks in advance.
 

When you say you want to break your table down, do you mean reduce the number of records, or reduce the number of fields?

If your issue is too many records in the table, how many are there? How many millions of records?

When reporting or analysing the data you will query just the records you want, so you are only ever working with the selected few.
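For example, pulling one user's entries for a given month might look like this in VBA; the table and field names are assumed from your description:

Code:
Dim rs As DAO.Recordset
' Assumed names: tblWorkData holding UserID and WorkDate.
Set rs = CurrentDb.OpenRecordset( _
    "SELECT * FROM tblWorkData " & _
    "WHERE UserID = 7 " & _
    "AND WorkDate >= #06/01/2010# AND WorkDate < #07/01/2010#")
Do While Not rs.EOF
    Debug.Print rs!WorkDate
    rs.MoveNext
Loop
rs.Close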

If the number of fields is the issue, because different tasks have different headings, then you could break this up into one table per main task, but quite a few fields would have to be redundant for this to be worth doing.

Each "Form" only needs to show the Text Box Controls bound to the relevant fields and the fact that some fields are Null may not be a problem - except in calculations but this isn't a mentioned problem.

Our database returns data instantly when processing tables with hundreds of thousands of records.

Indexing may be something to review if speed is an issue.
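If it comes to that, an index only takes a line of DDL to add; the table and field names here are assumed:

Code:
' Assumed names; run once. Indexes the fields most queries filter on.
CurrentDb.Execute "CREATE INDEX idxUserDate ON tblWorkData (UserID, WorkDate)"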
 
Hi

I have a database with a large table for collecting data, and it can grow large enough to have an impact on performance. The way I have dealt with this is as follows:

Create a table that is an exact replica of your main table, but without any data in it. For example:

tblImportData [Main table]
tblImportDataHistory [Copy of main table]

The tblImportData table contains the current data, that is, the information imported yesterday.

The tblImportDataHistory table contains all data older than yesterday.

Each day I run a process that imports data into my tblImportData table. Before that data is imported, an append query appends the existing contents to the tblImportDataHistory table, and a delete query then empties tblImportData ready for the fresh data.
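In outline, the daily step is just two action queries; mine are saved queries, but the SQL is the same, and the table names match the example above:

Code:
Public Sub RollOverDailyData()
    ' 1. Copy everything currently in the main table into the history table.
    CurrentDb.Execute _
        "INSERT INTO tblImportDataHistory SELECT * FROM tblImportData", _
        dbFailOnError
    ' 2. Empty the main table ready for the fresh import.
    CurrentDb.Execute "DELETE FROM tblImportData", dbFailOnError
End Sub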

This keeps the tblImportData table at a reasonable size for the kind of work we do.

Now, I have a rule for my tblImportDataHistory table whereby data older than six months no longer needs to be stored in the database, although we may still need to refer to it, so I have the following actions:

Another append query copies data older than six months into a table called tblExportDataHistory, and a delete query then removes that data from tblImportDataHistory.
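Again in outline, with ImportDate standing in for whatever date field you keep on each row:

Code:
Public Sub ArchiveOldHistory()
    ' Move rows older than six months into the export table...
    CurrentDb.Execute _
        "INSERT INTO tblExportDataHistory " & _
        "SELECT * FROM tblImportDataHistory " & _
        "WHERE ImportDate < DateAdd('m', -6, Date())", dbFailOnError
    ' ...then remove them from the history table.
    CurrentDb.Execute _
        "DELETE FROM tblImportDataHistory " & _
        "WHERE ImportDate < DateAdd('m', -6, Date())", dbFailOnError
End Sub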

I then have code that exports the contents of the tblExportDataHistory table in fixed-width format to a text file named ExportDataHistory_CurrentDate, where CurrentDate is in ddmmyy format [230610 (UK style)]. So every time data older than six months is exported, a text file is created with the aforementioned name plus the date the data was exported from my database.
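The export itself boils down to one TransferText call; the folder and the name of the saved export specification below are placeholders you would set up yourself:

Code:
Public Sub ExportHistoryToText()
    Dim strFile As String
    ' Builds e.g. ExportDataHistory_230610.txt for 23 June 2010.
    strFile = "C:\Archive\ExportDataHistory_" & _
        Format(Date, "ddmmyy") & ".txt"
    ' "HistorySpec" is a fixed-width export specification saved through
    ' the export wizard (Advanced... > Save As).
    DoCmd.TransferText acExportFixed, "HistorySpec", _
        "tblExportDataHistory", strFile
End Sub

Re-importing an old file later is the same call with acImportFixed and a matching import specification.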

This way, if I need to refer to data that is no longer in the database, I can easily retrieve it.

This method works for me. It may not be exactly what you are looking for, but it might help you work out what you need to do to achieve your goal.

John
 
Large means different things to different people and many new developers use the term for databases that more experienced developers consider trivial.

Some go to great lengths to move records out to other tables or databases. Normalisation and indexing are actually far more important when dealing with large numbers of records.

I have one table with over six million records and it still responds without delay. I am letting it grow just to see what happens.
 
