Ascertain record / output file size

jens

New member
Joined
Jun 28, 2007
Messages
5
Hi, I need to work out a sound way of determining the projected size of a csv file output. The generated file must be under 200k. Checking the created file is easy but the process is currently all wrong.

I limit the output query to 140 records each pass (which brings each file in at around 180k), but this is dependent on the stored information staying within reasonable parameters. The records have a memo field and other large fields that could easily contain more than average data and make the current restriction method nonsense. Due to other issues it would also be much better to have far less margin on the size and utilise close to the full 200k.

Is there a method to measure a record size in bytes (without parsing each field!) or can someone point me in a better direction please?

If I do have to character count each field of each record in the query result (with the idea of then rerunning and increasing the record number parameter) what would be the most efficient and sensible way of handling this?
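For what it's worth, the character-count idea above can be sketched without parsing field contents: sum the length of each field's text plus delimiters. This is illustrative only (the function name is invented) and assumes a single-byte output codepage, where Len is roughly the byte count:

Code:
Function EstimateRecordBytes(rs As DAO.Recordset) As Long
    Dim fld As DAO.Field, total As Long
    For Each fld In rs.Fields
        ' Null-safe: Null & "" yields "", so empty fields count as 0
        total = total + Len(fld.Value & "") + 1   ' +1 for the comma after each field
    Next
    ' The slot counted for the trailing comma becomes the first byte of
    ' vbCrLf; add one more for the second byte
    EstimateRecordBytes = total + 1
End Function

Keeping a running total of this per record would let the export stop just short of 200k instead of rerunning the query.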

Thank you
 
Two questions…

What will you do if the file size exceeds 200k?
How are you currently producing the csv file?
 
Hi Chris

If I can test the potential output size then I will simply reduce the parameter passed to the query, reducing the total number of records output for that pass so that the file does not exceed 200k. At the moment a created file size >197k just throws a warning and we have to deal with the issue manually.

The output method is below.

Code:
DoCmd.TransferText acExportDelim, "ourexportspecs", "ourexportquery", strFilename, -1, , 65001

The DoCmd.OutputTo method fails to use the specified Unicode codepage, which is critical for us (a verified Access bug), so we use the method above instead.

The code takes the last record ID of the first query result and passes that as a query parameter so that the next query output begins from there and so on.
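The loop described above might look something like the following. This is a sketch, not the poster's actual code: the table, field and path names are invented, and it uses the common trick of exposing the boundary ID through a public function, since TransferText cannot take query parameters directly:

Code:
' Module-level boundary; the saved query filters on GetStartID(), e.g.
'   SELECT TOP 140 * FROM tblData WHERE ID > GetStartID() ORDER BY ID
Public lngStartID As Long

Public Function GetStartID() As Long
    GetStartID = lngStartID
End Function

Sub ExportInChunks()
    Dim rs As DAO.Recordset, seqNum As Long
    lngStartID = 0
    Do
        Set rs = CurrentDb.OpenRecordset("ourexportquery")
        If rs.EOF Then rs.Close: Exit Do
        rs.MoveLast
        DoCmd.TransferText acExportDelim, "ourexportspecs", "ourexportquery", _
            "C:\exports\chunk" & seqNum & ".csv", -1, , 65001
        lngStartID = rs!ID        ' last exported ID becomes the next lower bound
        rs.Close
        seqNum = seqNum + 1
    Loop
End Sub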
 
I'll throw my 2 cents in. When I've had to limit output file sizes I use my own export methods, because Access doesn't have anything built in that handles this well. Here is a quick sample of something I might do to export CSVs with a specified size limit. It exports all the records to files numbered in sequential order. It probably won't be exactly what you need, but it may give you ideas. HTH

Code:
Function testCSV(qName As String, csvPath As String, csvName As String, Optional sizeLimit As Long = 204800)
    Dim rs As DAO.Recordset, fld As DAO.Field
    Dim outSTR As String, recSTR As String
    Dim fHDL As Integer, seqNum As Long   ' FreeFile returns an Integer, not a Double

    Set rs = CurrentDb.OpenRecordset(qName)
    Do Until rs.EOF
        ' Build one record as a comma-separated line
        recSTR = ""
        For Each fld In rs.Fields
            recSTR = recSTR & fld.Value & Chr(44)
        Next
        recSTR = Left(recSTR, Len(recSTR) - 1) & vbCrLf

        ' Flush the buffer before adding a record that would push the file
        ' past the limit, so no single file can exceed sizeLimit
        If Len(outSTR) > 0 And Len(outSTR) + Len(recSTR) > sizeLimit Then
            fHDL = FreeFile
            Open csvPath & csvName & seqNum & ".csv" For Binary Access Write As fHDL
                Put fHDL, , outSTR
            Close fHDL
            seqNum = seqNum + 1
            outSTR = ""
        End If
        outSTR = outSTR & recSTR
        rs.MoveNext
    Loop

    ' Write whatever is left in the buffer
    If Len(outSTR) > 0 Then
        fHDL = FreeFile
        Open csvPath & csvName & seqNum & ".csv" For Binary Access Write As fHDL
            Put fHDL, , outSTR
        Close fHDL
    End If
    rs.Close
    Set rs = Nothing
End Function
Sub test()
    testCSV "qry_TEST", CurrentProject.Path & "\", "myCsv"
End Sub
 
Thanks DJKarl, this was where I was beginning to head and your code has some great starters for me. What I cannot see (or more likely lack the knowledge for) is how to ensure the correct codepage for a custom output such as this. I also need to write the first line as a field list, but I think I have an ADO output example somewhere for that.
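As an aside, the field-list first line can be built straight from the DAO recordset, no ADO needed. A short sketch (the query name is invented):

Code:
Dim rs As DAO.Recordset, fld As DAO.Field, hdr As String
Set rs = CurrentDb.OpenRecordset("ourexportquery")
For Each fld In rs.Fields
    hdr = hdr & fld.Name & Chr(44)
Next
hdr = Left(hdr, Len(hdr) - 1) & vbCrLf
' Seed the output buffer with hdr before appending any data rows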
 
I have never used the codepage option on the TransferText command. I moved away from TransferText after I found a bug in its import method years ago (it made up data in certain instances).

65001 is indeed the codepage for UTF-8. One thing to watch: StrConv with vbUnicode gives you UTF-16 (what VBA calls "Unicode"), not UTF-8, so a plain Put of the converted string produces a UTF-16 file. Keep in mind that UTF-16 files take 2 bytes for every character instead of 1, while UTF-8 uses 1 to 4 bytes per character depending on the text.
 
Never noticed that constant before. Many thanks!

This should resolve this one but as we have to use other codepages in other areas it would be nice to be able to create a generic function that can cover all.
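One route to a generic cover-all function is ADODB.Stream, whose Charset property accepts codepage names. A sketch using late binding (illustrative, not tested against this data; note that SaveToFile prefixes a byte-order mark for UTF-8):

Code:
Sub WriteTextWithCharset(ByVal filePath As String, ByVal text As String, ByVal charsetName As String)
    Dim stm As Object
    Set stm = CreateObject("ADODB.Stream")
    stm.Type = 2                    ' adTypeText
    stm.Charset = charsetName       ' e.g. "utf-8", "windows-1252"
    stm.Open
    stm.WriteText text
    stm.SaveToFile filePath, 2      ' adSaveCreateOverWrite
    stm.Close
End Sub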
 
