I have 31 tables which, as they are updated, are appended to the Master table (a bad name, it should really be called Holding, but for this example I called it Master).
The Master Table could have upwards of 5000 records depending on when it is updated. I need to remove the duplicates before moving the data to the Output Table.
I can automatically import the data and get it to append the data that has not been appended since last use to the Master Table.
I cannot figure out how to efficiently remove the duplicate records.
In the sample database attached, the key fields are spec_number, spec_result, and date_authorised.
If the spec_number and spec_result are the same, I need to delete any record (row) that is older than the newest date_authorised.
However, if the date_authorised values are the same but the spec_result is different, then all the different spec_results need to be kept.
Basically, spec_number and spec_result together uniquely identify the record, after which I only need the latest version (the newest date_authorised).
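A sketch of that dedup rule as a correlated DELETE, shown here via Python's sqlite3 so it is runnable (the table and column names follow the description above; the sample rows are made up, and Access SQL would need the same query adapted to its dialect):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Master (
    spec_number TEXT, spec_result TEXT, date_authorised TEXT)""")

# Made-up sample rows illustrating the three cases described above.
rows = [
    ("A1", "PASS", "2023-01-01"),  # older duplicate -> should be deleted
    ("A1", "PASS", "2023-03-15"),  # newest for (A1, PASS) -> keep
    ("A1", "FAIL", "2023-03-15"),  # same date, different spec_result -> keep
    ("B2", "PASS", "2023-02-10"),  # only row for (B2, PASS) -> keep
]
cur.executemany("INSERT INTO Master VALUES (?, ?, ?)", rows)

# Delete any row that has a newer date_authorised for the same
# (spec_number, spec_result) pair; the newest row per pair survives.
cur.execute("""
    DELETE FROM Master
    WHERE EXISTS (
        SELECT 1 FROM Master AS newer
        WHERE newer.spec_number     = Master.spec_number
          AND newer.spec_result    = Master.spec_result
          AND newer.date_authorised > Master.date_authorised
    )
""")
conn.commit()

print(sorted(cur.execute("SELECT * FROM Master").fetchall()))
# -> [('A1', 'FAIL', '2023-03-15'), ('A1', 'PASS', '2023-03-15'),
#     ('B2', 'PASS', '2023-02-10')]
```

The same idea can be run directly as a delete query before appending to the Output table: every row is removed if a strictly newer row exists with the same spec_number and spec_result, which also automatically keeps rows that share a date_authorised but differ in spec_result.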