I remember when a 16 KB machine was considered huge and we wondered what we would ever do with 32 KB. Then times changed. Before long you had to pick a memory model based on the compiler's memory management scheme: we went through the Tiny, Small, Medium, Compact, Large, and Huge models as memory addressing technology improved. These days you are limited only by what your CPU's address bus can reach. For instance, I have no trouble hosting 16 GB of RAM on my box, and for some server-class machines, 256 GB is not out of the question.
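If you want to see why the address bus sets the ceiling, the arithmetic is just 2^N bytes for an N-bit bus. Here's a quick sketch of that math (plain arithmetic, nothing specific to any particular CPU; the bus widths are only illustrative):

```python
def addressable(bits: int) -> str:
    """Return 2**bits bytes formatted in human-readable units."""
    size = 2 ** bits
    for unit in ("bytes", "KB", "MB", "GB", "TB", "PB", "EB"):
        if size < 1024:
            return f"{size:g} {unit}"
        size /= 1024
    return f"{size:g} ZB"

# A few historically interesting bus widths.
for bits in (16, 20, 32, 48, 64):
    print(f"{bits}-bit address bus -> {addressable(bits)}")
# 16-bit -> 64 KB, 20-bit -> 1 MB, 32-bit -> 4 GB,
# 48-bit -> 256 TB, 64-bit -> 16 EB
```

The 20-bit row, by the way, is the old 1 MB real-mode limit that gave us those segmented memory models in the first place.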
It's always an interesting back-and-forth between "space is cheap" on the one hand and "but data volume is growing exponentially" on the other. For a few years, "space is cheap" ruled the discussion. Storage got so cheap that SQL Server DBAs had a hard time persuading developers to right-size their column types; nobody cared. Then big data came along, and the sheer quantity of information being collected meant "space is cheap" finally met its match: once again sizing matters, in proportion to just how big your data is. Every principle in moderation. Except moderation? LOL. Didn't someone's signature once say that?
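To put a rough number on the right-sizing point: SQL Server stores an INT in 4 bytes and a BIGINT in 8, so the waste from one oversized column is easy to estimate (the row count below is purely hypothetical):

```python
# Cost of one oversized column, using SQL Server's documented storage
# sizes: INT = 4 bytes, BIGINT = 8 bytes.
rows = 1_000_000_000                  # hypothetical billion-row table
wasted_bytes = (8 - 4) * rows         # BIGINT where INT would have done
print(f"~{wasted_bytes / 1024**3:.1f} GB wasted")  # ~3.7 GB, per column
```

At a million rows nobody notices; at a billion, it's real buffer pool, index, and backup space, which is exactly where "space is cheap" stops being true.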