There are a few threads here on the subject of optimizing performance. You might try the keyword "Performance" when searching this forum.
In general, Access is not network-friendly because to Access, a back-end (BE) on a server sits on a FILE server, not a database server. Access has to pull the relevant parts of the file across the network to your local computer to get what it wants.
I'll be honest, 60,000-100,000 records isn't that many for a DB. We have a few folks who report using twice that. But... having said that, I'll add that the specifics of your tables can have a lot to do with speed.
For instance, if the tables have many fields AND many records, you MIGHT consider a performance enhancement of sorts. It is truly apostasy to some, but a one-to-one table mapping (i.e. SPLIT your big tables) might be productive. What you would do is put the searchable fields in one table and the supplemental fields in another. Then build a query that searches the search-field table. Then build a second query that JOINS the supplemental-data table to that search query on the resulting primary keys.
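To make the two-query idea concrete, here is a minimal sketch using SQLite (via Python) as a stand-in for Access/Jet — the table and field names (Orders_Search, Orders_Detail, etc.) are made up for illustration, not from anyone's actual database:

```python
# One-to-one split pattern, sketched in SQLite as a stand-in for Access.
# All table/field names here are hypothetical examples.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Narrow table: just the key plus the fields you actually search on.
cur.execute("""
    CREATE TABLE Orders_Search (
        OrderID   INTEGER PRIMARY KEY,
        Customer  TEXT,
        OrderDate TEXT
    )""")

# Wide companion table: same key, exactly one row per row above (1:1),
# holding the bulky fields you only need once you have a hit.
cur.execute("""
    CREATE TABLE Orders_Detail (
        OrderID  INTEGER PRIMARY KEY REFERENCES Orders_Search(OrderID),
        Notes    TEXT,
        Comments TEXT
    )""")

cur.executemany("INSERT INTO Orders_Search VALUES (?, ?, ?)",
                [(1, "Acme", "2024-01-05"), (2, "Globex", "2024-02-11")])
cur.executemany("INSERT INTO Orders_Detail VALUES (?, ?, ?)",
                [(1, "long notes...", "x"), (2, "more notes...", "y")])

# Step 1: the search query touches only the narrow table.
# Step 2: join the supplemental data to those hits on the shared key.
rows = cur.execute("""
    SELECT s.OrderID, s.Customer, d.Notes
    FROM (SELECT OrderID, Customer
          FROM Orders_Search
          WHERE Customer = 'Acme') AS s
    INNER JOIN Orders_Detail AS d ON d.OrderID = s.OrderID
""").fetchall()
print(rows)  # -> [(1, 'Acme', 'long notes...')]
```

In Access you would save the inner SELECT as its own query and join the second query to it, which is what forces the search to run against the narrow table first.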
You would do this because of BUFFERS. Access reads records into memory buffers a page at a time. The longer the record, the fewer records fit per buffer. The fewer records per buffer, the more disk reads you must do. The more disk reads you must do, the slower your DB will get. So... give the initial query that searches for specific criteria a smaller record to scan, thus getting more records per disk read. THEN take advantage of the one-to-one relationship to grab the details for only the selected records. Access will do this right if you isolate the search query from the query that builds the final data set. Stated another way, by forcing Access to evaluate the search query first, you shrink the (virtual) search space before the wide records ever come across the wire. (Yeah, that usually throws folks for a loop - but it is true.)
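The buffer arithmetic is easy to sketch. Jet 4.0 reads data in 4 KB pages; the record sizes below are made-up illustration values, but the ratio is the point:

```python
# Back-of-the-envelope buffer arithmetic. The 4 KB page size is Jet 4.0's;
# the 800- and 80-byte record sizes are hypothetical examples.
import math

PAGE_SIZE = 4096  # bytes per data page (Jet 4.0)

def records_per_page(record_bytes: int) -> int:
    """How many whole records fit in one page/buffer read."""
    return PAGE_SIZE // record_bytes

wide   = records_per_page(800)  # full record with all the bulky fields
narrow = records_per_page(80)   # just the key plus the searchable fields

# A full scan of 100,000 rows costs roughly this many disk reads:
print(math.ceil(100_000 / wide))    # 20000 reads against the wide table
print(math.ceil(100_000 / narrow))  # 1961 reads against the narrow one
```

Same 100,000 rows, roughly a tenth of the disk reads to find the hits - which is the whole payoff of the split.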
This is one of the few cases where a split of a single table to a pair of tables having a one-to-one relationship makes sense, and only for performance reasons.
You will also find comments in some threads, if you search far enough, advising you to ensure you have proper indexes on all fields you search, sort, or join on - and on NO other fields, since every extra index costs space and slows inserts and updates. This is also a good idea.
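For what it's worth, here is what that looks like in DDL, again sketched in SQLite via Python (in Access you would set the index in table design view or run a similar CREATE INDEX in a DDL query; names are illustrative):

```python
# Index the searched field, leave the rest alone. SQLite stands in for
# Access here; table, field, and index names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Customers (
        CustID   INTEGER PRIMARY KEY,
        LastName TEXT,
        Notes    TEXT
    )""")

# Index the field you search on...
con.execute("CREATE INDEX idx_customers_lastname ON Customers (LastName)")
# ...and deliberately leave Notes unindexed: you never search it, so an
# index there would only cost space and slow down inserts/updates.

# The query plan confirms the search uses the index rather than a scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Customers WHERE LastName = 'Smith'"
).fetchall()
print(plan)
```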
The only other option, if you find that no amount of tweaking helps, is to invest in SQL Server and put it on a "hot" box. For example, HP Pavilions have dual-core or better CPUs at 2.8 - 3.2 GHz and can handle SATA or other fast disks spinning at 7,200-10,000 RPM. Dell and IBM have comparable models. Nor are we talking incredibly expensive machines. An HP such as I described was available two years ago or more from one of the Sam Walton stores for about $1700 and change, WITH a big flat-panel screen and a pretty good video card.
Make one of those your server for SQL Server and watch it fly. Or get a Fibre Channel disk with even better transfer rates if you want your system to scream. And if you are going to make that investment, make the data disk RAID-1: either member of a mirrored pair can service a read request, which can nearly double read throughput (writes still go to both disks) for a well-designed database.
Nobody said it was going to be cheap to speed up your DB, least of all me. But this IS a case where "throwing money at a problem" can help it at least somewhat.