For me the most compelling argument is this:
A single character, depending on which character set we use, could require between one and three bytes to store. But for simplicity, let's assume every character takes only one byte.
Therefore, the text string "Poor" would require 4 bytes to store. But if we used an ID in its place, we could choose a single-byte type, store just one byte every time we need to mark a record as "Poor", and define "Poor" only once in the database.
So instead of 4 bytes * N records = M total bytes consumed, we get 1 byte * N records + 4 bytes for the single definition of "Poor" = M total bytes consumed.
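The arithmetic above is easy to sketch. This is just an illustration (Python rather than anything Access-specific), and the record count `N` is a made-up example number, not one from the answer:

```python
# Storage comparison: repeating the 4-byte string "Poor" in every record
# versus storing a 1-byte ID per record plus one 4-byte definition of "Poor".
N = 1_000_000  # illustrative record count

string_storage = 4 * N       # 4 bytes per record, duplicated everywhere
id_storage = 1 * N + 4       # 1 byte per record + the single "Poor" definition

print(string_storage)  # 4000000
print(id_storage)      # 1000004
```

The gap only widens with longer strings: a 20-character description repeated a million times wastes roughly 19 MB compared with a one-byte ID.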
But it's not all about storage space. Which is faster to compare: a string that could be longer than the processor's register width, or an integer that fits in a single register? By using integers, the processor can grind through far more records than it could by comparing strings. Even if we had a 4-byte string, the same size as a Long (also 4 bytes), the Long is still faster. Why? Because integers don't have a collation, and for text the processor must take an extra step to check the collation... Should "Smith" = "smith"? "Mueller" = "Müller"? To avoid that extra step, you would have to explicitly compare the strings as binary. While you can specify this via the StrComp function, I'm not aware of a character set in Access that lets you compare by binary by default. Those questions are totally irrelevant for integers.
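To make the collation point concrete, here is a rough sketch in Python (not Access). The `collation_compare` function is a hypothetical stand-in for whatever rules a real text collation applies; the point is that every text comparison pays for normalization work that integer comparison simply never does:

```python
import unicodedata

def collation_compare(a: str, b: str) -> bool:
    """Case- and accent-insensitive comparison, standing in for a text
    collation: extra normalization happens before every single compare."""
    def normalize(s: str) -> str:
        # Decompose accented characters, drop the accent marks, fold case
        decomposed = unicodedata.normalize("NFD", s)
        stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
        return stripped.casefold()
    return normalize(a) == normalize(b)

print("Smith" == "smith")                   # False: plain binary compare
print(collation_compare("Smith", "smith"))  # True: collation treats them as equal

# "Müller" in two Unicode encodings: binarily different, collation-equal
print("Mu\u0308ller" == "M\u00fcller")                   # False
print(collation_compare("Mu\u0308ller", "M\u00fcller"))  # True

print(3 == 3)  # integer compare: one instruction, no collation step at all
```

Whether "Mueller" should equal "Müller" depends on the collation's language rules (German phone-book collation says yes, this simple sketch says no), which is exactly the kind of ambiguity an integer key sidesteps.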
So, in terms of both storage space and processing cost, a surrogate key composed of integers will almost always win out over text.