If you are building a data stream file then you have a couple of issues.
First, you COULD write a data stream using code to open a file and start pumping character sequences to it. Opening the file for output lets you write anything you want to the file as text, but your problem might be synchronizing your output to the requirements of the receiver of this file.
For instance, does the receiver expect some sort of record delimiter? If this is going to a UNIX system, your records would most often be delimited by a simple LineFeed (vbLf) character. Excel on Windows expects vbCrLf for CSV record delimiters, though in practice it usually tolerates a bare vbLf as well; just pick one and be consistent. If you are sending a CSV-formatted file that is not actually going to Excel, then you need to find out how the receiver expects a line to end.
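For what it is worth, here is a minimal sketch (assuming VBA's Print # statement and file number #1; the file name is just a placeholder) of ending records with an explicitly chosen delimiter rather than whatever the runtime gives you:

```vba
' Sketch only: pick ONE record delimiter to suit the receiving system.
Dim RecDelim As String
RecDelim = vbLf              ' UNIX receiver; use vbCrLf for Excel on Windows
Open "FUBAR.CSV" For Output As #1
Print #1, "field1,field2";   ' trailing ";" suppresses Print's automatic vbCrLf
Print #1, RecDelim;          ' then emit exactly the delimiter you chose
Close #1
```

The trailing semicolons matter: without them, Print # tacks on its own vbCrLf and you lose control of the line ending.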
Now, as to how you store that dataset, it might be ugly enough to do something like this:
table TaxMess
Field: PersonKey, LONG (part of compound PK)
Field: ItemKey, LONG (part of compound PK)
Field: Datum, String - the value to be displayed for the particular item.
Field: Sep, String - either an empty string (zero length) or a comma.
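If you prefer to create that work table from code instead of the table designer, a sketch (Access SQL fired through DAO; the constraint name is my invention) might look like:

```vba
' Sketch: build the work table via DDL. TEXT(255) is the Access
' short-text ceiling; Sep only ever holds "" or ",".
CurrentDb.Execute _
    "CREATE TABLE TaxMess (" & _
    "PersonKey LONG NOT NULL, " & _
    "ItemKey LONG NOT NULL, " & _
    "Datum TEXT(255), " & _
    "Sep TEXT(1), " & _
    "CONSTRAINT pkTaxMess PRIMARY KEY (PersonKey, ItemKey))"
```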
Then you would have a nested loop. I will NOT flesh this out as polished production code; treat it as pseudo-code in VBA clothing:

Open "FUBAR.CSV" For Output Access Write Lock Read As #1
Set RS = CurrentDb.OpenRecordset("TaxMess", dbOpenDynaset)
For PersK = 1 To MaxPersKey
    For ItemK = 1 To MaxItemKey
        RS.FindFirst "[PersonKey]=" & CStr(PersK) & " AND [ItemKey]=" & CStr(ItemK)
        If Not RS.NoMatch Then
            Print #1, RS!Datum & RS!Sep;   ' Print, not Write - Write # adds quote marks and its own commas; the ";" holds the line open
        End If
    Next ItemK
    Print #1, {selected delimiter};        ' end this person's record
Next PersK
RS.Close
Close #1
This style (to be experimented with before you commit your business to it) gives you over 2 billion persons (the positive range of a LONG) and over 2 billion attributes, but I'll bet you would use far fewer than that. The idea is that FIRST you define item keys for the attributes you have to track, and then for every person you write a record for every item key (even if blank), pre-formatted as the string it should look like in the file, e.g. CStr(x) if it is a number.
This method WILL NOT let you write character strings longer than 255 bytes in a single operation, because an Access short-text field tops out at 255 characters. But if you lay out the item codes so that extended strings occupy consecutively numbered items, and RS.Sep is empty for segments that need to be concatenated, you can write longer strings: the first 255 characters with a blank Sep, the next 255 with a blank Sep, and so on, with "," in Sep only on the last segment.
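To make the segmentation concrete, here is a sketch of a hypothetical helper (the name StoreLongValue, and the assumption that consecutive ItemKey values have been reserved for the segments, are mine):

```vba
' Sketch: store a long value as consecutive 255-character segments.
' Only the LAST segment carries the comma in Sep, so reading the segments
' back in ItemKey order and concatenating Datum & Sep rebuilds the original
' field followed by its separator.
Sub StoreLongValue(RS As DAO.Recordset, PersK As Long, FirstItemK As Long, Value As String)
    Dim Pos As Long, ItemK As Long
    ItemK = FirstItemK
    For Pos = 1 To Len(Value) Step 255
        RS.AddNew
        RS!PersonKey = PersK
        RS!ItemKey = ItemK
        RS!Datum = Mid$(Value, Pos, 255)
        If Pos + 255 > Len(Value) Then
            RS!Sep = ","          ' last segment ends the field
        Else
            RS!Sep = ""           ' more segments follow; no separator yet
        End If
        RS.Update
        ItemK = ItemK + 1
    Next Pos
End Sub
```

Note that the Sep field must allow zero-length strings (an Access table option that is off by default) for the empty-Sep segments to work.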
For cases where you are just converting short numbers, Sep will ALWAYS be "," and for things that might be blank, Datum might be empty but Sep will not be.
Think about this as an approach. This essentially converts attributes that would be columns in your virtual long record into numbered short records where one key is the person and one key is the attribute number.
In cases where a field COULD run to more than 1000 bytes between commas but in a given record does not need to, both Datum and Sep would be zero-length strings for the item keys reserved for the later portions of the string.