Stop users trying to open database when in inconsistent state

tucker61

Registered User.
Local time
Today, 08:27
Joined
Jan 13, 2008
Messages
344
We have a database on our network that is not split. We have been experiencing the database going into an inconsistent state.

Once we get this, we cannot kick anyone out of the database and we have to repair it.

Whilst we are repairing it, other users try to open the database, which causes us further issues.

Apart from splitting the db any other advice?
 
Apart from splitting the db any other advice?
There may be other advice, but none better. You should have a front end for each user.

You can keep this code in a file on the network whose job is to copy a file from the network to a local temp location and open it:
Code:
Private Sub copyAndOpenMdbFile(FileName As String)
On Error GoTo Err_copyAndOpenMdbFile
    Dim SourcePath As String, DestinationPath As String

    SourcePath = ""      'Your network folder here
    DestinationPath = "" 'Your local folder here

    Me.lblPleaseWhait.Visible = True
    DoEvents

    'Copy the FE from the network to the local folder, then open the local copy
    FileCopy SourcePath & "\" & FileName, DestinationPath & "\" & FileName

    dOpenFile.OpenFile DestinationPath & "\" & FileName

Exit_copyAndOpenMdbFile:
    Exit Sub

Err_copyAndOpenMdbFile:
    MsgBox Err.Description
    Resume Exit_copyAndOpenMdbFile

End Sub
(In my case I give the option of opening one of a few FEs, all in one folder. You can modify this code.)

dOpenFile is a module I found on the web; it opens files according to the OS's file-type associations. Just put this code in a module and name it dOpenFile:
Code:
Option Compare Database
Option Explicit

#If VBA7 Then
    'On 64-bit Office the Declare must be PtrSafe and use LongPtr for handles
    Declare PtrSafe Function ShellExecute Lib "shell32.dll" Alias "ShellExecuteA" (ByVal hwnd As LongPtr, ByVal lpOperation As String, ByVal lpFile As String, ByVal lpParameters As String, ByVal lpDirectory As String, ByVal nShowCmd As Long) As LongPtr
#Else
    Declare Function ShellExecute Lib "shell32.dll" Alias "ShellExecuteA" (ByVal hwnd As Long, ByVal lpOperation As String, ByVal lpFile As String, ByVal lpParameters As String, ByVal lpDirectory As String, ByVal nShowCmd As Long) As Long
#End If

Public Function OpenFile(sFileName As String)
On Error GoTo Err_OpenFile

    OpenFile = ShellExecute(Application.hWndAccessApp, "Open", sFileName, "", "C:\", 1)

Exit_OpenFile:
    Exit Function

Err_OpenFile:
    MsgBox Err.Number & " - " & Err.Description
    Resume Exit_OpenFile

End Function

Public Function TestOpeningFile()
On Error GoTo Err_TestOpeningFile

    OpenFile "C:\Windows\Win.ini"
    
Exit_TestOpeningFile:
    Exit Function

Err_TestOpeningFile:
    MsgBox Err.Number & " - " & Err.Description
    Resume Exit_TestOpeningFile
    
End Function

You still distribute files by simply putting them in the network location.
ATB
 
We have split again for the time being. The problem is that when we split the database, the reports take absolutely ages to run; if I could speed up the reports, then the split might work.
 
Did you split to a local location, or the network?
From a local location things should run better.
 
Sorry, I know no better advice.
What you are looking for is like building a hospital under a broken bridge, instead of fixing it.
 
Think I have found a way round it. The database is now split, but instead of running reports I am just running queries. It seems to only take a couple of minutes.

Thanks anyway
 
The database is now split, but instead of running reports I am just running queries. It seems to only take a couple of minutes.

Just an observation... If the query is fast and the report is slow, you have a couple of places to look.

First, is the report based on the same query that you say runs fast? Or is it based on the whole table, using its own private .RecordSource as a query? In the same vein, is the report opened with report-level filters (as opposed to a .RecordSource query that does the filtering)? If you base your report on the query (and let the query do all the sorting/filtering BEFORE the report ever kicks in), you might get a speed boost in the report, too.
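As a sketch of the difference (the report and query names here are hypothetical; adjust them to your own objects):
Code:
'Slower pattern: open the report against the whole table and filter at report level.
DoCmd.OpenReport "rptSales", acViewPreview, , "OrderDate >= #01/01/2024#"

'Usually faster: set the report's RecordSource to a saved query that already
'filters and sorts, so only the needed rows cross the network. In qrySalesFiltered:
'   SELECT * FROM Orders
'   WHERE OrderDate >= #01/01/2024#
'   ORDER BY OrderDate;
DoCmd.OpenReport "rptSales", acViewPreview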

Second, do you have a LOT of nested groupings in your report? However you cut it, if a long report has a lot of group headers and footers with running-total elements or other special features, that will make the report much slower.

Third, if you have split databases, even if you have corruption now and then, it will be a LOT less than if you had everybody doing a free-for-all on a single combined file. Therefore, your lost time should be drastically reduced by the FE/BE split.

To further minimize the bad effects of sharing even the BE file, look at this idea: For reports and for queries that drive reports, consider checking the properties sheet for each one to assure that they are set for dbOptimistic locking. I would say "No locking" if you had some assurance that nobody would be running reports during the times when your underlying data gets updated, but that usually isn't the case, so Optimistic locking is your next best speed-saver.
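In code, the same setting looks something like this (a sketch assuming a DAO recordset; qryReportData is a hypothetical query name):
Code:
Dim db As DAO.Database
Dim rs As DAO.Recordset

Set db = CurrentDb
'dbOptimistic: a row is only locked at the moment Update is called,
'rather than from Edit onwards as with dbPessimistic.
Set rs = db.OpenRecordset("qryReportData", dbOpenDynaset, 0, dbOptimistic)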
 
Are the front ends on each machine, or in the network?
 
If the FE is on each individual machine (like our case), then the file locks associated with the FE file and its contents are still taken out - but since everybody has a separate FE, nobody else sees YOUR FE locks so there is no contention and no deadlock for anything in your FE.

The only locks in contention are on the BE file, and simply OPENING a database takes only a "shared read" lock that doesn't block anything. When something in the BE has to be locked, if you used the Optimistic setting, the locks are not taken out until absolutely needed and they are released as soon as possible. (As opposed to pessimistic locks, which block things for a longer time.)
 
Just an observation:

Ensure your tables are properly indexed. If they are not, the query will be slow. You should index any field which is frequently used in criteria and/or sorting, with the exception of fields which have a limited range of values (e.g. yes/no).

Use of domain functions (DSum, etc.) will have a significant effect on performance, as will using multi-value and lookup fields, and potentially subqueries.

Use of a leading * in criteria (e.g. LIKE '*something' or LIKE '*something*') also means the query is unable to use an index, so it has to do a sequential search and will be slow.

When run locally you probably won't see much benefit, but it will be very noticeable when running across a network.
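Indexes can be added in table design view, or with DDL run once against the BE. A sketch, with hypothetical table and field names (Orders, CustomerID, OrderDate):
Code:
CurrentDb.Execute "CREATE INDEX idxOrdersCustomer ON Orders (CustomerID);"
CurrentDb.Execute "CREATE INDEX idxOrdersDate ON Orders (OrderDate);"

'A criterion like  WHERE CustomerID = 42  or a sort on OrderDate can now
'use an index; a leading wildcard such as  LIKE '*something'  still cannot.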
 
CJ, absolutely right. Indexing a large table can make a HUGE difference. Didn't think of that because I've been distracted today.

To amplify CJ's comments, ... if your query includes DLookups, consider asserting proper relationships so that your query can JOIN to its lookup tables rather than running the domain queries.
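To illustrate that swap, with hypothetical names (Orders, Customers, CustomerID, CustomerName):
Code:
'Slow: the domain lookup runs a separate query for every row of the result.
'   SELECT OrderID,
'          DLookUp("CustomerName", "Customers", "CustomerID=" & CustomerID) AS Who
'   FROM Orders;

'Faster: join once and let the engine use the indexes on CustomerID.
'   SELECT o.OrderID, c.CustomerName
'   FROM Orders AS o
'   INNER JOIN Customers AS c ON c.CustomerID = o.CustomerID;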
 
