SELECT * INTO TargetTableName if each file is going into a different table.
INSERT INTO TheSingleTable if they are all going into a single table.
Disabling the indexes during big imports is standard practice with databases; that way the indexes only need to be rebuilt once.

Fully concur with CJ. Put the index on after the table is built. It turns out the same is true when loading a table in Oracle.
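In Access this means dropping the index before the bulk inserts and recreating it afterwards with DDL; a minimal sketch, assuming a hypothetical index idxName on destTable's [Name] column:

```sql
DROP INDEX idxName ON destTable;
-- ... run the bulk INSERT INTO destTable statements here ...
CREATE INDEX idxName ON destTable ([Name]);
```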
INSERT INTO destTable
SELECT *
FROM (SELECT * FROM [TEXT;DATABASE=C:\filefoldername;HDR=Yes].sourcefilename.csv) AS txt
Dim sf As String
Dim db As DAO.Database
Set db = CurrentDb
'Dir needs a wildcard pattern, otherwise it returns the folder name itself
sf = Dir("C:\filefoldername\*.csv")
While sf <> ""
    db.Execute "INSERT INTO destTable SELECT * FROM (SELECT * FROM [TEXT;DATABASE=C:\filefoldername;HDR=Yes]." & sf & ") AS txt", dbFailOnError
    sf = Dir
Wend
Yet another advantage of using SQL Server, where disabling and rebuilding indexes is a simple routine process.

In Access you can't disable indexes; you need to remove them, then reinstate them once the import has completed.
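On SQL Server the routine looks something like the following; a minimal sketch, assuming a hypothetical index IX_destTable_Name on destTable:

```sql
ALTER INDEX IX_destTable_Name ON destTable DISABLE;
-- ... bulk load destTable here ...
ALTER INDEX IX_destTable_Name ON destTable REBUILD;
```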
'''''
' arnelgp
'
Private Sub t()
    Const conInsert As String = _
        "insert into table1 ([Name]) " & _
        "select [Name] from [Text;FMT=Delimited(,);HDR=Yes;IMEX=2;ACCDB=YES;DATABASE=the_path].[the_file]"
    Dim strInsert As String
    Dim strPath As String
    Dim strFile As String
    'place the path of the text files here
    strPath = "D:\arnel\"
    strFile = Dir$(strPath & "*.txt")
    Do Until Len(strFile) < 1
        'replace "the_path" in the constant string with our path,
        'and "the_file" with the text filename
        '
        'the "." in the filename must be replaced with "#":
        'the Text ISAM reserves the dot for its database.table
        'naming, so the extension's dot is written as "#"
        strFile = Replace$(strFile, ".", "#")
        strInsert = Replace$(Replace$(conInsert, "the_path", strPath), "the_file", strFile)
        CurrentDb.Execute strInsert
        strFile = Dir$
    Loop
End Sub
insert into Tabel1 ([Id], [Veld1],[Veld2],[Veld3]) select F1,F2,F3,F4 from [Text;FMT=Delimited(;);HDR=No;IMEX=2;ACCDB=NO;DATABASE=N:\Mijn documenten\].[bestand1#csv]
insert into Tabel1 ([Id], [Veld1],[Veld2],[Veld3]) select split(F1,';')(0) As T1,split(F1,';')(1) AS T2,split(F1,';')(2) AS T3,split(F1,';')(3) AS T4 from [Text;FMT=Delimited(;);HDR=No;IMEX=2;ACCDB=NO;DATABASE=N:\Mijn documenten\].[bestand1#csv]
[Text;FMT=Delimited(;);HDR=No;IMEX=2;ACCDB=NO;DATABASE=N:\Mijn documenten\].[bestand1#csv]
[Text;FMT=Delimited(;);HDR=No;IMEX=2;ACCDB=YES;DATABASE=N:\Mijn documenten\].[bestand1#csv]
[Text;FMT=Delimited(;);HDR=No;ACCDB=YES;DATABASE=N:\Mijn documenten\].[bestand1#csv]
You are assuming a one-time import. I was assuming an ongoing process where "new" files are imported every "month".

I would add that if this is a straight import, i.e. all records across the 2000 files are unique, there is no need to check whether a record already exists.
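If records could repeat across files, an unmatched-style insert keeps duplicates out; a minimal sketch, assuming [Name] is the identifying column (table, folder, and file names here are hypothetical):

```sql
INSERT INTO destTable ([Name])
SELECT txt.[Name]
FROM [Text;DATABASE=C:\filefoldername;HDR=Yes].[sourcefilename#csv] AS txt
LEFT JOIN destTable AS d ON d.[Name] = txt.[Name]
WHERE d.[Name] IS NULL;
```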
DateDelim | DateFourDigitYear | DateLeadingZeros | DateOrder | DecimalPoint | FieldSeparator | FileType | SpecID | SpecName | SpecType | StartRow | TextDelim | TimeDelim |
---|---|---|---|---|---|---|---|---|---|---|---|---|
/ | -1 | 0 | 0 | . | ; | 850 | 1 | Bestand1LinkSpec | 1 | 1 | : |
Attributes | DataType | FieldName | IndexType | SkipColumn | SpecID | Start | Width |
---|---|---|---|---|---|---|---|
0 | 10 | FIRST | 0 | 0 | 1 | 5 | 10 |
0 | 4 | LNR | 0 | 0 | 1 | 1 | 4 |
0 | 10 | SECND | 0 | 0 | 1 | 15 | 12 |
0 | 10 | THRD | 0 | 0 | 1 | 27 | 11 |
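The rows above describe a saved import specification and its columns. A sketch of using such a spec from VBA via TransferText, with the spec, table, and file names taken from the posts above (assuming a fixed-width spec, since Start/Width are filled in, and HDR=No):

```vba
' Hypothetical usage of the saved spec shown above
DoCmd.TransferText acImportFixed, "Bestand1LinkSpec", "Tabel1", _
    "N:\Mijn documenten\bestand1.csv", False
```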