How do I set up the table to minimize redundant data?
First, you must thoroughly understand the concept of NORMALIZATION. The "minimize redundant data" issue is exactly what led folks to research ways to normalize tables. You can look up normalization in the Access Help files or on the Internet. If you do a Google search on "Database Normalization" you will probably get a thousand-plus references that are scholarly in nature and about a bazillion that are part of advertising for a given product, part of a college course, or perhaps appear in a library of articles taken from the Journal of the ACM (the Association for Computing Machinery). JACM is a WONDERFUL reference source but you need to be near a university to find the older articles.
Here is your problem: If you are importing from Excel, that dataset is NOT normalized. It is what we call "FLAT" - i.e. neither normalized nor relieved of its redundancies. When you import flat files using the Import Wizards, you get flat tables. Always. Never any exceptions (that I recall). So the technically correct answer is that in a single step, you CANNOT reasonably eliminate redundant data.
But what you CAN do is import the data to a temporary table as a framework for subsequent queries that would eliminate this redundancy in several (and I DO mean several) steps.
Your earlier response suggests that you are so new to Access that the answer FoFa gave you merely confused you. This makes it difficult to help you without writing something for you. I don't have that much time. What I can do is suggest that you start reading about normalization.
You will need to import your data to a holding table. You must build your DB with normalized target (final) tables. Then you can write a series of queries to extract PARTS of the imported data and append the parts to the target tables. Once you have identified the final tables and the columns of your spreadsheet that contribute to each, you can write the queries. The catch will be building the relationships.
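To give you the flavor of it, here is the shape of ONE such append query. Every name in it (tblHolding, tblCustomers, the columns) is invented purely for illustration - you will substitute your own. The key-handling part comes a bit further down.

    INSERT INTO tblCustomers ( CustName, CustPhone )
    SELECT CustName, CustPhone
    FROM tblHolding;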
I'll summarize something you will learn when you start reading up on normalization: Tables that are normalized often have parent-child relationships (a.k.a. one-to-many) in which one record in a parent table corresponds to many records in a child table. Or, if you prefer other nomenclature: What I call a "parent" table, you might call an "independent" table. What I call a "child" table, you might call a "dependent" table. Using my family nomenclature, it is not unheard of to have multi-generational tables in a hierarchy 3-5 levels deep. In that case, you have parent, child, grandchild, great-grandchild, and great-great-grandchild tables.
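If it helps to see the shape of a one-to-many pair in table-design terms, here is a bare-bones sketch, again with invented names. You can run these as DDL queries (Access runs only one DDL statement per query, so run them separately) or just build the equivalent in the table designer:

    CREATE TABLE tblParent (
        ParentID COUNTER PRIMARY KEY,
        LongDescription TEXT(100)
    );

    CREATE TABLE tblChild (
        ChildID COUNTER PRIMARY KEY,
        ParentID LONG,
        ChildDetail TEXT(50),
        CONSTRAINT fkParent FOREIGN KEY (ParentID)
            REFERENCES tblParent (ParentID)
    );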
The series of queries has to populate the highest level of the hierarchy first - in my model, the parent. You can eliminate all duplicates from the parent table. Once the parent has been populated and all duplicates are gone, you can use a special field known as a key. It has to be unique for the parent (or for the independent side of the relationship) but can be duplicated in the child table. You store this key in the child tables to POINT to a parent record. Look up Autonumber fields for suggestions on how to define this key, or you can compute your own key, but in the latter case, KEEP IT SHORT. The shorter the better.
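In query terms, populating the parent and wiping out the duplicates can be a single pass - SELECT DISTINCT does the de-duplication and the Autonumber hands out the keys for free. Invented names again:

    INSERT INTO tblParent ( LongDescription )
    SELECT DISTINCT LongDescription
    FROM tblHolding;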
OK, once you populate the parent and have the keys, you can populate the child tables. Where you would have duplicated a long field that is now defined in the parent, you simply place the record key for the corresponding parent entry. You haven't actually ELIMINATED the duplication, but you have radically shortened what gets duplicated: a short key instead of the whole long field.
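The query that does the substitution joins the holding table back to the parent on the long field so it can pick up the key. Sketch only, same invented names as above:

    INSERT INTO tblChild ( ParentID, ChildDetail )
    SELECT P.ParentID, H.ChildDetail
    FROM tblHolding AS H
    INNER JOIN tblParent AS P
        ON H.LongDescription = P.LongDescription;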
The same concept applies when you have a grandchild table. Once the parent is populated, populate the child tables. They will ALSO need a unique key of their own to identify records. Once those keys are defined, you can use ANOTHER query to populate the grandchild tables using the child keys as pointers to the corresponding records in the child tables. This can be carried on to as many levels (within reason) as you might need.
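Same trick, one level down - join the holding table back to the CHILD to pick up ITS key. (In real life you may need to match on more than one column to land on the right child record; one column keeps the sketch readable.)

    INSERT INTO tblGrandchild ( ChildID, GrandDetail )
    SELECT C.ChildID, H.GrandDetail
    FROM tblHolding AS H
    INNER JOIN tblChild AS C
        ON H.ChildDetail = C.ChildDetail;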
Once you have the keys for each level, you can use JOIN queries to rebuild the records to LOOK LIKE the original input, but the data will take up much less space in the underlying tables.
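Something along these lines - and note the parentheses, because Access insists on them when you stack more than one join:

    SELECT P.LongDescription, C.ChildDetail, G.GrandDetail
    FROM (tblParent AS P
    INNER JOIN tblChild AS C ON C.ParentID = P.ParentID)
    INNER JOIN tblGrandchild AS G ON G.ChildID = C.ChildID;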
One final thought. If you are doing this for maybe 10-15 Excel records, don't bother converting to Access. You get no significant benefit (except the learning experience) until you start getting into hundreds of records with dozens of fields each. A cost-benefit analysis for Excel vs. Access ALWAYS picks Excel first for really small datasets.