Banana, I'll give it a rip.
Pound for pound, every time you normalize a table, you save space. How? If you would otherwise store an item's full details in the master record, but instead store one pointer to that item, AND the item has significant potential for multiple references, then for the second and subsequent references to the same item you save the difference in space between the pointer and the full set of descriptive data for that item.
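To put a rough picture on that, here's a sketch in Access-flavored SQL DDL (table and field names are invented for illustration, the -- lines are just annotation that Access's SQL view won't accept, and in practice you'd likely build these in the table designer anyway). The bulky descriptive data lives exactly once in the lookup table; everything else carries a 4-byte pointer.

-- The lookup table: the item's descriptive data is stored here, once.
CREATE TABLE tblItems (
ItemID AUTOINCREMENT CONSTRAINT PK_Items PRIMARY KEY,
ItemName TEXT(50),
ItemDescription TEXT(255)
);

-- The detail table: each reference to an item costs a 4-byte LONG,
-- not another copy of the name and description.
CREATE TABLE tblOrderDetails (
OrderDetailID AUTOINCREMENT CONSTRAINT PK_OrderDetails PRIMARY KEY,
OrderID LONG,
ItemID LONG CONSTRAINT FK_OrderDetails_Items REFERENCES tblItems (ItemID),
Quantity LONG
);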
But that raises the question: how often does an item have to repeat before the lookup is worth the effort? See, that's where YOU come in. I don't know your data set. You do. If you split items into a main field plus a lookup field, but it turns out that the lookup field has the same cardinality as its parent table, then you might as well not have performed the split.
CARDINALITY: the expected average number of records returned by a query with a single non-wildcard value in its selection criteria. For unique keys, cardinality is 1. For yes/no fields, the cardinality of an N-record table is roughly N/2.
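If you want a quick reality check on your own data before you split out a lookup table, a GROUP BY query will tell you how often each value actually repeats (table and field names here are placeholders for whatever you've got):

SELECT Category, COUNT(*) AS RecordsPerValue
FROM tblItems
GROUP BY Category
ORDER BY COUNT(*) DESC;

If most values come back with a count of 1, the lookup table buys you nothing. If typical values come back dozens or hundreds of times, the split will pay for itself.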
OK, past that point, what else does normalization get you? Where the details ARE repeated, the table containing the pointer is smaller. Smaller tables are faster to manage if you aren't pulling in the related record at the same time. It has to do with the number of records that fit in a disk buffer: shorter records = more records per buffer = fewer reads to work through the same number of records.
Your question: When does it make sense to store a calculated field?
Case 1: Where you have something that is not normally captured, it might be a good idea to store the computation. For example, a store with an "Our prices are lower or else we will beat anyone else's advertised price by 10%" offer. They offer ordinary discounts - but on the day that their competitor decides to do them dirt, they have to compute a SPOT discount - and STORE it - because they don't have that particular discount in their discount tables. So they store the discount code "Beat competitor's price by 10%" and then enter the competitor's advertised price. The COMPUTER stores the computed discount, which cannot be recovered later from their own records because the discount rate wasn't one of their own.
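A rough sketch of what that stored row might look like (table, fields, and every number invented for illustration): the clerk keys in the competitor's advertised price, and the computed discount amount gets written along with it, because nothing in the shop's own discount tables could ever reproduce that number.

-- Our list price is 24.99; the competitor advertised 19.99, so we sell at 19.99 * 0.9.
INSERT INTO tblSaleDetails
(SaleID, ItemID, ListPrice, DiscountCode, CompetitorPrice, DiscountAmount)
VALUES
(1001, 57, 24.99, 'BEAT-ADV-10', 19.99, 24.99 - (19.99 * 0.9));
-- DiscountAmount (about 7.00) is the computed value we keep, because the 19.99
-- came off a competitor's flyer, not out of our own discount tables.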
Case 2: Inventory system where the quantity on hand is the sum of all positive and negative transactions on a given item... but the database is getting big and bloated, so it is time to archive. You pick a date, compute the sum of all transactions up to that date, and create a record with the computed stock level as of that moment. Then you archive all previous records - and you have now used a computed field to show the stock level as of the date of archiving.
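A rough sketch of that in SQL (hypothetical table, fields, and cutoff date; in real life you copy the old rows to an archive database BEFORE you delete anything):

-- 1. Write one summary transaction per item, carrying the computed balance as of the cutoff.
INSERT INTO tblInventoryTrans (ItemID, TransDate, Quantity, TransType)
SELECT ItemID, #1/1/2005#, SUM(Quantity), 'ARCHIVE BALANCE'
FROM tblInventoryTrans
WHERE TransDate < #1/1/2005#
GROUP BY ItemID;

-- 2. Only after the old rows are safely copied elsewhere, clear them out.
-- The summary rows are dated ON the cutoff, so the strict < leaves them alone.
DELETE FROM tblInventoryTrans
WHERE TransDate < #1/1/2005#;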
You also said: "I need to implement several workarounds to make the county field act as if it's part of the same table without confusing my users."
Forms don't usually know or care whether your RecordSource is a query or a table. Use queries that include JOINs to do the translations. Then it LOOKS like the translation is a true part of the dataset when in fact it is only there because of the query.
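For the county case, something along these lines (the table and field names are my guesses at your structure): save it as a query and point the form's RecordSource at the query instead of the table.

SELECT P.PersonID, P.LastName, P.FirstName, C.CountyName
FROM tblPeople AS P
INNER JOIN tblCounties AS C ON P.CountyID = C.CountyID;

Your users see CountyName sitting right next to the other fields as if it lived in the same table, while the storage underneath stays the normalized pair of tables. A combo box bound to CountyID, displaying CountyName, handles the data-entry side the same way.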
As to government storage of records, you want those records stored in the most economical terms, which usually still implies normalization. I can tell you that the US Dept. of Defense has a 30-year archives rule and we are doing our best to live with it. Sometimes you do take what appears to be a radical shortcut to meet government requirements. But it is still best to retain your data in normalized form for as long as you can.
Strict normalization can be a real pain in the patootie. I'll let you in on a little secret. For Access, if you can get to strict 3rd normal form, you are ahead of the game. You COULD get to 4th or 5th normal forms - they do exist - and there are specialized normalizations as well. But in general, stop at 3rd and be content that you've done a pretty good job.
Question regarding speed vs. space... You don't need to worry about CPU speed on most computers these days; the processors have FAR outstripped the disks. Let's say that you are running a 1 GHz computer. (Yeah, I know... yesterday's news...) It's an Intel box for which instructions take 1-3 clock cycles, so say 2 cycles on average. (That's wrong, but use the number for argument's sake.) Two cycles per instruction on a 1 GHz machine is 0.5 giga-instructions per second = 500 MILLION instructions per second. A disk seek takes maybe 5-15 milliseconds because a mechanical part is involved, so call it 10 msec. Do the math and you find that one seek is worth about 5 million instructions. Even in a high-level emulated language, that's a LOT of lines of code between seeks. So if you have to compute something on the fly, do it. It will take you a LONG time to break even between storing something and recomputing it each time.
The byte you save by recomputing instead of storing adds up slowly. If that single byte is the ONLY difference between the record with the stored value and the record without it, it might not matter. When the computed value is four or eight bytes long, the savings add up more rapidly.
This is kind of rambling. I'll let you digest this and if you have any other "tangential" questions, ask them. Also, if I missed answering a question or if my answer seemed to be strangely unresponsive or unrelated, chalk it up to digestive languor. Ask again, perhaps being more direct in what you wanted to know.