Umm, I don't think that's a good idea. I don't even think it's possible to preserve a snapshot of a table's related indexes if you delete all the records and re-import the data. If you drop the table, it drops all related indexes. If you delete all the records but keep the table, the indexes are effectively emptied, and the deleted records are not physically removed; they're just flagged as logically deleted. Your best bet is to drop the table(s), recreate them, and either load the data with the duplicate records already removed, or load everything including the dups, create non-unique indexes, and then remove the dups with a delete query.
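If you go the second route, the delete query itself is pretty simple. Here's a rough sketch, assuming SQL Server/T-SQL syntax and placeholder names (Customers, CustomerID, LoadDate); adjust it to whatever actually defines a duplicate in your data:

    -- Keep one row per CustomerID and delete the rest.
    -- ROW_NUMBER() numbers the copies within each key; anything past 1 is a dup.
    WITH ranked AS (
        SELECT CustomerID,
               ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY LoadDate DESC) AS rn
        FROM Customers  -- placeholder table loaded with the dups still in it
    )
    DELETE FROM ranked
    WHERE rn > 1;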
If you drop the tables and recreate them, which I strongly recommend, the joins (the relationships/foreign keys between the tables) are automatically removed and you have to re-define them. For best performance, especially with large tables, it's best to start out with brand spanking new tables, load all the data, and then create all the indexes, joins, constraints, permissions, etc.
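The rebuild sequence would look roughly like this (again assuming SQL Server syntax; the object names are made up just to show the order of operations, and Customers is assumed to already have a key on CustomerID):

    -- 1. Create the bare table, no indexes or constraints yet
    CREATE TABLE Orders (
        OrderID    INT      NOT NULL,
        CustomerID INT      NOT NULL,
        OrderDate  DATETIME NULL
    );
    -- 2. Bulk load the data here (BULK INSERT, bcp, SSIS, whatever you use)
    -- 3. Only then add the indexes, re-define the join, and re-grant permissions
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON Orders (CustomerID);
    ALTER TABLE Orders ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID);
    GRANT SELECT ON Orders TO SomeRole;  -- placeholder role name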
You should also create a clustered index on the primary key so the table data is physically stored in the same order as the index. You will also want to update the table statistics so the cost-based query optimizer can build the best query execution plans.
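In SQL Server terms, that last part would be something like this (placeholder names again):

    -- Make the primary key the clustered index so rows are stored in key order
    ALTER TABLE Orders ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);
    -- Refresh statistics so the optimizer sees accurate row counts and distributions
    UPDATE STATISTICS Orders WITH FULLSCAN;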