Hello!
I'm designing a database for running batch jobs on my server, for everything from encoding video to zipping files.
I've made a couple of databases already, but my knowledge is limited, so I need some guidelines on how to design this DB.
Functions
The idea is to have a basic form with several textboxes for options and a button to start the whole process.
The button searches a specified folder for a specified file type (extension), then grabs file info like path, file size, etc., which is stored in a table. A batch file is then generated and executed. A rough sketch of that scan step follows below.
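To make the scan step concrete, here is a minimal sketch in Python (chosen just for illustration; the folder path, the ".mkv" extension, and the sqlite3 table layout are all placeholders for whatever the real app uses):

```python
# Sketch: scan a folder for one extension and store file info in a table.
# "C:/batch/input" and ".mkv" are placeholder values, not a fixed design.
import sqlite3
from pathlib import Path

def scan_folder(folder, extension, db_path="jobs.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS files (
                       id INTEGER PRIMARY KEY,
                       path TEXT,
                       size_bytes INTEGER,
                       status TEXT DEFAULT 'pending')""")
    # grab path and size for every matching file in the folder
    for f in Path(folder).glob(f"*{extension}"):
        con.execute("INSERT INTO files (path, size_bytes) VALUES (?, ?)",
                    (str(f), f.stat().st_size))
    con.commit()
    con.close()

scan_folder("C:/batch/input", ".mkv")
```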
Now, I'm not quite sure what works best. I want to use the data generated in the command-line window (stuff like speed and time, which is updated continuously) for calculating the global speed and time details.
What works best: generating and running one command at a time, or generating one batch file for all the files?
Is there a way to grab data from the command-line output window, or do I have to use log files as the source?
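For reference, this is the kind of thing I mean by grabbing the output directly. A minimal Python sketch, assuming a hypothetical "encoder.exe" command: instead of launching a batch file, launch the tool with its output piped back to the caller and read it line by line:

```python
# Sketch: read a tool's console output live instead of parsing log files.
# "encoder.exe" and its arguments are placeholders for the real job command.
import subprocess

proc = subprocess.Popen(
    ["encoder.exe", "input.mkv", "output.mkv"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,   # many tools print progress on stderr
    text=True,
)
for line in proc.stdout:        # one progress line as the tool prints it
    print("progress:", line.rstrip())
proc.wait()
```

One caveat: some encoders redraw a single progress line with carriage returns rather than printing newlines, in which case the output has to be read in smaller chunks instead of line by line.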
Ideally, each file would run one at a time. The database would have a textbox or textboxes updated, say, every five seconds, based on the data from the command-line window. After each file is encoded, data about the process would be stored in the database and merged for statistical purposes. A sketch of that loop follows below.
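Roughly like this, again sketched in Python with placeholder names ("encoder.exe", the "files"/"stats" tables), and with a console print standing in for where the form's textbox would be refreshed:

```python
# Sketch: run jobs one file at a time, sample progress every five seconds,
# then store per-file stats for later merging. Command and schema are assumed.
import sqlite3
import subprocess
import time

def run_jobs(db_path="jobs.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS stats (
                       path TEXT, seconds REAL, returncode INTEGER)""")
    pending = con.execute(
        "SELECT path FROM files WHERE status='pending'").fetchall()
    for (path,) in pending:
        start = time.time()
        proc = subprocess.Popen(["encoder.exe", path])
        while proc.poll() is None:
            time.sleep(5)   # the five-second refresh interval
            print(f"{path}: {time.time() - start:.0f}s elapsed")
        # store per-file results for the global statistics
        con.execute("INSERT INTO stats VALUES (?, ?, ?)",
                    (path, time.time() - start, proc.returncode))
        con.execute("UPDATE files SET status='done' WHERE path=?", (path,))
        con.commit()
    con.close()

run_jobs()
```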
Any response appreciated!