In the meantime, I'm having problems with the time it takes for forms to load, queries to run, etc.
A couple of observations:
If the DB is remotely located, your load/update/query time depends very heavily on the speed of your local area network's backbone. Slow net = slow Access. End of story.
Things people do to help with DB times fall into a few major categories.
1. Hardware upgrade at the point of system slowness. If the DB is on a slow file server, put it on a faster file server. If the network leading to the DB is slow, see if THAT can be made faster. If your own system is old and cranky, get a faster system.
2. Simple file maintenance. DBs tend to get fragmented from normal use. This is a side-effect of the algorithms used and is totally unavoidable over the long term. So to counteract problems from fragmentation, you defrag your disk, repair your DB, and compact your DB on a regular basis.
3. With forms as individual DB objects I can't help you... but if the forms and queries are slow, a couple of options come to mind. First and foremost, any table that is commonly used in forms and/or queries should have an index on the field on which most actions are based. More than one index is both legal and advisable, within limits. I.e. don't put an index on EVERY field. But find the most commonly used fields, usually no more than three or four, and put indexes on them. It would be FAR better if one candidate field were unique, so that you could give that field non-duplicate status in its index. Then it could become your prime key.
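To make the indexing advice concrete, here is a minimal sketch using Python's sqlite3 module (the table and index names are hypothetical, invented for illustration; Access accepts essentially the same CREATE INDEX / CREATE UNIQUE INDEX statements in its SQL view):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Hypothetical tables, standing in for whatever your forms are based on.
con.execute("""CREATE TABLE Customers (
    CustomerID INTEGER PRIMARY KEY,  -- unique field: a natural prime key
    LastName   TEXT,
    City       TEXT)""")
con.execute("""CREATE TABLE Orders (
    OrderNo    TEXT,
    CustomerID INTEGER,
    OrderDate  TEXT)""")

# Index only the handful of fields that forms and queries filter or
# sort on most -- every extra index slows inserts and updates.
con.execute("CREATE INDEX idx_customers_lastname ON Customers (LastName)")
con.execute("CREATE INDEX idx_customers_city ON Customers (City)")

# A UNIQUE index enforces the "non-duplicate status" mentioned above:
# a second row with the same OrderNo is rejected outright.
con.execute("CREATE UNIQUE INDEX ux_orders_orderno ON Orders (OrderNo)")
```

With the unique index in place, the engine itself guarantees no duplicate OrderNo values ever get in, which is exactly what makes such a field a good candidate key.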
4. For any form, report, or query, DB performance is strongly influenced by how well the DB is normalized.
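A tiny sketch of what normalization buys you, again with sqlite3 and made-up table names: instead of repeating customer details on every order row, store each customer once and let the form's record source join the tables back together on demand.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Denormalized shape: customer name and city repeated on every order.
con.execute("""CREATE TABLE FlatOrders (
    OrderNo INTEGER, CustName TEXT, CustCity TEXT, Amount REAL)""")

# Normalized shape: customer facts stored once, referenced by key.
con.executescript("""
CREATE TABLE Customers (
    CustID   INTEGER PRIMARY KEY,
    CustName TEXT,
    CustCity TEXT);
CREATE TABLE Orders (
    OrderNo INTEGER PRIMARY KEY,
    CustID  INTEGER REFERENCES Customers,
    Amount  REAL);
""")

con.executemany("INSERT INTO Customers VALUES (?, ?, ?)",
                [(1, "Smith", "Derby"), (2, "Jones", "Leeds")])
con.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                [(10, 1, 25.0), (11, 1, 40.0), (12, 2, 9.5)])

# The join reconstructs the flat view only when it is actually needed.
rows = con.execute("""
    SELECT o.OrderNo, c.CustName, o.Amount
    FROM Orders o JOIN Customers c ON c.CustID = o.CustID
    ORDER BY o.OrderNo""").fetchall()
```

Updating a customer's city now touches one row instead of every order they ever placed, and the smaller, narrower tables are cheaper for the engine to scan.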