Crazy idea for you
I may have overstated the case a bit and I don't want to mislead you. If you must use a query to perform an append or update, use it. However, there are some practices to avoid if you have VERY LARGE updates to perform on a regular basis.
First, nesting a query within a query (perhaps within a third query) causes each query to create a list of record pointers that must temporarily be stored somewhere, and that storage can contribute to bloat. The contribution is usually modest because the pointer lists themselves aren't that large, and I'm not sure the lists always get written to the DB file when you have enough physical memory. Access isn't always clear about this.
Second, staging data to a secondary (temporary) table inside your DB creates an actual table that must eventually be released, and its size is likely to be proportional to the amount of data being updated or appended. Further, Access flat out sucks at space reclamation while it is running. It just can't reclaim space "on the fly" the way some systems can. That inability to do dynamic space reclamation is the actual cause of bloat.
Third, if you use a secondary table and it has indexes on it, the indexes must also be released eventually, and again their size will be proportional to the amount of data involved. Yet indexes are useful, and in some cases maybe unavoidable.
What I am saying is: if the temp table lives in your primary DB, reduce temporary table use to an absolute minimum. Minimize the number of indexes on the temp table, down to a primary key and nothing else, perhaps. And try to avoid nesting queries over large record sets as part of the append, make-table, or update queries that you run.
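For instance, here's a minimal sketch of what I mean by a lean temp table: DDL run from VBA that creates a staging table with a primary key and nothing else. The table and field names here are just placeholders, not anything from your DB.

Public Sub CreateLeanTempTable()
    ' Create a staging table with a primary key and no other indexes.
    ' Table and field names are hypothetical.
    CurrentDb.Execute _
        "CREATE TABLE tblTempStage " & _
        "(ID LONG CONSTRAINT pkTempStage PRIMARY KEY, Payload TEXT(255))", _
        dbFailOnError
End Sub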
Where it makes sense and where you have the expertise to do so, using VBA to read a file and do direct recordset operations (particularly for appends, but also for some updates) is the most space-efficient method. Like every alternative, it involves a trade-off: coding complexity vs. minimized space requirements.
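To illustrate, here is a minimal sketch of that recordset approach, assuming DAO, a text file with one value per line, and a hypothetical table tblImport with a single Long field named Amount. Adjust the path, parsing, and field assignments to your actual layout.

Public Sub AppendFromFile()
    ' Append rows read from a text file directly into a table,
    ' bypassing query staging entirely. The file path, table,
    ' and field names are hypothetical.
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim intFile As Integer
    Dim strLine As String

    Set db = CurrentDb
    Set rs = db.OpenRecordset("tblImport", dbOpenDynaset, dbAppendOnly)

    intFile = FreeFile
    Open "C:\Data\Import.txt" For Input As #intFile
    Do While Not EOF(intFile)
        Line Input #intFile, strLine
        rs.AddNew
        rs!Amount = CLng(strLine)
        rs.Update
    Loop
    Close #intFile

    rs.Close
    Set rs = Nothing
    Set db = Nothing
End Sub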
BUT you can also open a second database file (.mdb) via linking or other methods. If you put your temp tables in that second database, you can perform maintenance on it while users are searching the primary one. A delete query run against the secondary DB from workstation A won't affect the speed of a select query run by workstation B, as long as the select query doesn't touch the secondary DB.
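As a sketch, the cleanup might look like this: open the side DB directly and run the delete there, so no locks ever touch the BE. The path and table name are placeholders.

Public Sub CleanSideTempTable()
    ' Empty a temp table that lives in a side database,
    ' without touching the primary BE at all.
    Dim dbTemp As DAO.Database
    Set dbTemp = DBEngine.OpenDatabase("C:\Temp\SideTemp.mdb")
    dbTemp.Execute "DELETE FROM tblTempStage", dbFailOnError
    dbTemp.Close
    Set dbTemp = Nothing
End Sub

And because the side DB is a separate file, you can compact it (DBEngine.CompactDatabase) on a schedule, whenever nobody has it open, without ever taking the primary DB offline.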
Further, you can even keep the secondary DB on another disk or in another folder. Now here, I'm going to offer a thought for experimentation that might be more than you want to do, but maybe it is workable. Think about it before you try it 'cause it isn't trivial.
Right now, you have some users sharing the BE database. But technically, this works because their FE databases point to the BE database for the tables they use.
If you had a custom FE that WASN'T a true copy of the FE the others use, you could link tables in your private copy that don't appear in the shared FE. Your temporary update tables could be defined there and nowhere else. In that case, if the temp DB is local to the same drive as the special FE, an update operation will be faster than it would be with everything in the BE DB, and erasing the temp tables would be totally invisible to the other users. Nor would it involve any file or table locks on the BE DB, since the temp tables aren't there.
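Creating that private link is a one-time DAO operation inside the special FE, something like this sketch (paths and names are hypothetical):

Public Sub LinkLocalTempTable()
    ' Link a table from a local temp database into the special FE.
    ' The link exists only in this FE, so other users never see it.
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    Set tdf = db.CreateTableDef("tblTempStage")
    tdf.Connect = ";DATABASE=C:\Temp\SideTemp.mdb"
    tdf.SourceTableName = "tblTempStage"
    db.TableDefs.Append tdf

    Set tdf = Nothing
    Set db = Nothing
End Sub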
The cost of this is, in essence, double-entry bookkeeping. Your "special" FE has to be updated separately every time the primary FE gets updated. You can point the special FE at the primary FE and do the appropriate import operations, but you have added a layer of complexity. On the other hand, if it reduces the bloat factor of your DB, maybe it is a good idea. That will have to be your call.
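The re-sync itself can be semi-automated. TransferDatabase will pull objects from the primary FE into the special one, one object per call. A sketch, with a hypothetical path and form name:

Public Sub PullFromPrimaryFE()
    ' Re-import an updated object from the primary FE into this
    ' special FE. Repeat per changed object; names are examples.
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "C:\Apps\PrimaryFE.mdb", acForm, "frmMain", "frmMain"
End Sub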