As I say, I think it's just the sheer size of the data files you are handling.
Access needs to store the data, and it also needs to store any temporary working files, although you can get that space back by compacting. So importing a big file into a new table, and then copying that table into another table, will increase the database's size; if the files are very large, the growth will be correspondingly large.
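If you want to see how much of the growth is just temporary working space, you can compact into a copy and compare file sizes. Here is a minimal sketch in Python, assuming pywin32 is installed and the Jet DAO engine is registered on the machine (the paths are hypothetical, and the ProgID may be "DAO.DBEngine.120" on newer machines using the ACE engine):

import os
import win32com.client

SRC = r"C:\data\mydb.mdb"          # hypothetical path - substitute your own
DST = r"C:\data\mydb_compact.mdb"  # compacted copy is written here

# CompactDatabase writes the compacted copy to a NEW file; it will not
# overwrite an existing one, so remove any stale copy first.
if os.path.exists(DST):
    os.remove(DST)

# "DAO.DBEngine.36" is the Jet 4.0 DAO ProgID.
engine = win32com.client.Dispatch("DAO.DBEngine.36")
engine.CompactDatabase(SRC, DST)

print("before: %d bytes" % os.path.getsize(SRC))
print("after:  %d bytes" % os.path.getsize(DST))

The difference between the two figures is roughly the space the temporary/dead pages were occupying.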
It does not indicate any errors. The only issues are that:
a) your database may exceed the maximum size allowed by Jet (2 GB for a Jet 4.0 .mdb), and
b) you may not be able to transport it so easily on a memory stick, say.
Depending on what the files you are importing actually ARE, you may be able to treat them as an external linked data source instead, which might reduce the size of your database; that may or may not be appropriate, though.
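For example, Jet can query a delimited text file in place, without importing it at all. A minimal sketch using Python and pyodbc, assuming the classic Access ODBC driver is installed; mydb.mdb and sales.csv are hypothetical names, and note that Jet's Text ISAM wants the dot in the file name replaced with a # inside the SQL:

import pyodbc

# Connect to the Access database through ODBC (the driver name varies
# by Office version; this is the classic Jet one).
conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb)};"
    r"DBQ=C:\data\mydb.mdb"
)
cur = conn.cursor()

# Query the CSV in place via the Text ISAM - nothing is copied into
# the .mdb, so the database file does not grow.
cur.execute(
    "SELECT COUNT(*) FROM "
    "[Text;HDR=Yes;DATABASE=C:\\data].[sales#csv]"
)
print(cur.fetchone()[0])
conn.close()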
Assuming that you do need to import them, Access has its own paradigm for handling data: the new data is not just concatenated as additional text. It is all managed by Access, which also needs to maintain indexes etc. for many of your fields. I am sure Access doesn't even necessarily store text fields as identical text strings; since strings are stored in variable lengths, Access MUST surely use some algorithm to split strings into chunks (probably a linked list of some sort), so that it can easily compute the location of any string within the physical .mdb file. So there is certainly an overhead for the database manager (i.e. Access) over and above the size of the data itself.
What has already been pointed out is that a table with 137 fields is unusually wide, and you may be able to structure this in a better way. The fact that you have been able to treat this as a file of 137 fields, and alternatively as a file of 120 fields, definitely indicates something is amiss: if the data were normalised, you would not be able to achieve any reduction in that way.
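As a purely hypothetical illustration of what normalising might look like: if most of those 137 fields are really repeated measures (say Result1 through Result135 per record), the normalised design is a tall table with one row per record/measure pair. A sketch of that reshaping in Jet SQL, run through pyodbc; the table and field names here (WideTable, RecordID, ResultN) are invented, so substitute your own:

import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb)};"
    r"DBQ=C:\data\mydb.mdb"
)
cur = conn.cursor()

# Tall replacement for the wide table: one row per (record, measure).
cur.execute(
    "CREATE TABLE Results "
    "(RecordID LONG, MeasureNo LONG, MeasureValue DOUBLE)"
)

# Move each wide column into the tall table, one INSERT per column.
for n in range(1, 136):
    cur.execute(
        "INSERT INTO Results (RecordID, MeasureNo, MeasureValue) "
        "SELECT RecordID, %d, Result%d FROM WideTable" % (n, n)
    )
conn.commit()
conn.close()

After a restructure like this you would compact again, since the copying leaves the old pages behind in the file.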