Hi, I need to work out a sound way of estimating the size of a CSV file before it is generated. The output file must stay under 200k. Checking the file after it has been created is easy, but the current process is all wrong.
At the moment I limit the output query to 140 records per pass (which brings each file in at around 180k), but this only works while the stored data stays within reasonable parameters. The records include a memo field and other large fields that could easily hold far more than average data, which would make the fixed record count meaningless. For other reasons it would also be much better to leave far less margin and utilise close to the full 200k.
Is there a way to measure a record's size in bytes (without parsing every field!), or can someone point me in a better direction, please?
If I do have to count the characters in each field of each record in the query result (with the idea of then re-running the query with an adjusted record-count parameter), what would be the most efficient and sensible way of handling this? A rough sketch of what I have in mind follows below.
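To illustrate the fallback idea, here is a minimal sketch (Python, purely illustrative; fetch_records and the file naming are placeholder assumptions, not my actual code). It measures each record's encoded CSV size as it goes and starts a new file before the running total would pass 200k, rather than relying on a fixed record count:

```python
import csv
import io

MAX_BYTES = 200 * 1024  # hard limit per output file


def csv_row_bytes(row, encoding="utf-8"):
    """Byte length of one record once CSV-quoted and encoded."""
    buf = io.StringIO()
    csv.writer(buf).writerow(row)
    return len(buf.getvalue().encode(encoding))


def flush(rows, filename, encoding="utf-8"):
    """Write one batch of rows out as a CSV file."""
    with open(filename, "w", newline="", encoding=encoding) as f:
        csv.writer(f).writerows(rows)


def write_batches(records, name_pattern="export_{:03d}.csv"):
    """Split records into files, each kept under MAX_BYTES."""
    batch, batch_bytes, file_no = [], 0, 0
    for row in records:          # records: hypothetical iterable of field lists from the query
        row_bytes = csv_row_bytes(row)
        # cut over to a new file if this record would push us past the cap
        if batch and batch_bytes + row_bytes > MAX_BYTES:
            flush(batch, name_pattern.format(file_no))
            batch, batch_bytes, file_no = [], 0, file_no + 1
        batch.append(row)
        batch_bytes += row_bytes
    if batch:
        flush(batch, name_pattern.format(file_no))
```

The appeal of sizing as you write is that there is no second pass and no guessing at a safe record count, but I don't know whether that is the most efficient route, hence the question.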
Thank you