I think the process, as described so far, is incorrect.
If it is the early stages in the life of the database then this is how I did it in the past. With one exception, all of the commercial databases I have created have been ‘databases at a distance’.
What the term ‘databases at a distance’ means is that a front end needs to guarantee its own survival when it runs after being sent to a customer. Guarantee means to create whatever it needs when it runs. When it runs means each and every time it runs.
One important thing needs to be remembered: there is never any destruction, only creation. Tables are added, never destroyed; fields are added, never destroyed; field sizes are only ever increased, never decreased; a change of field type is a new field, not a change of type of an existing field; if old data is required in the new field it is copied, not moved; and so on. No destruction at all. Redundancy, yes, but no destruction.
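By way of illustration, here is a minimal DAO sketch of a ‘type change’ done the additive way. The table and field names (tblOrders, Qty, QtyNum) are hypothetical, invented for the example:

' Minimal sketch of an additive "type change", assuming a hypothetical
' BE table tblOrders whose Text field Qty is superseded by a Long field.
' Nothing is destroyed: the old field stays, a new field is added and
' the old data is copied (not moved) into it.
Public Sub AddQtyNumField(beDb As DAO.Database)
    ' Add the new field; the old Qty field is left untouched.
    beDb.Execute "ALTER TABLE tblOrders ADD COLUMN QtyNum LONG", dbFailOnError
    ' Copy, never move: Qty keeps its data for older FE versions.
    beDb.Execute "UPDATE tblOrders SET QtyNum = CLng(Qty) " & _
                 "WHERE Qty IS NOT NULL", dbFailOnError
End Sub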
If we look at it that way then there is no need for a version table. What is required is a configuration table which is local to the front end. That table describes all modifications required by all versions up to and including the current version of the FE. Records in the configuration table must remain unchanged for versions other than the current version.
But, once the current FE has been sent to the customer, its records too must remain unchanged in the configuration table. (If it helps, we can view this as a documented transaction between us and the customer. The document has been sent and we cannot withdraw it. Any correction must take place as an addition to that document, not as a replacement of it.)
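For concreteness, here is one possible layout for such a local configuration table, as a minimal DAO sketch. The layout and all names (tblConfig, ChangeID, FEVersion, ChangeSQL) are my assumption; nothing above prescribes them:

' Hypothetical layout for the FE-local configuration table.
' One append-only record per change, processed in ChangeID order.
Public Sub CreateConfigTable()
    CurrentDb.Execute _
        "CREATE TABLE tblConfig (" & _
        "ChangeID LONG CONSTRAINT pkConfig PRIMARY KEY, " & _
        "FEVersion TEXT(20), " & _
        "ChangeSQL MEMO)", dbFailOnError

    ' Sample entries: note that version 1.1 corrects 1.0 by adding
    ' new records, never by rewriting the 1.0 records.
    CurrentDb.Execute "INSERT INTO tblConfig VALUES (1, '1.0', " & _
        "'CREATE TABLE tblOrders (OrderID LONG, Qty TEXT(10))')", dbFailOnError
    CurrentDb.Execute "INSERT INTO tblConfig VALUES (2, '1.1', " & _
        "'ALTER TABLE tblOrders ADD COLUMN QtyNum LONG')", dbFailOnError
End Sub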
But the requirement is only for the FE to bring the BE up to a level at which the FE can survive. The FE is not required to go any further than what it needs for its own survival.
----------
So this is the process at FE startup (sketched in code after the list)…
Disconnect all linked tables.
Open a recordset of the configuration table in the FE.
Apply all version requirements whether they are needed or not.
(Once the change code is stable, the only error should be that the change has already been made. That error can be ignored because a failed change will not affect the existing data, and all we are interested in is that the change has been made.)
Re-link the tables.
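Put together, a minimal VBA/DAO sketch of that startup sequence, assuming the hypothetical tblConfig layout sketched above and a known back-end path; every name here is illustrative, not a prescription:

' Minimal sketch of the startup pass. Assumes the hypothetical tblConfig
' sketched earlier; bePath and every name here are illustrative only.
Public Sub SurviveAtStartup(ByVal bePath As String)
    Dim feDb As DAO.Database, beDb As DAO.Database
    Dim rs As DAO.Recordset
    Dim td As DAO.TableDef, lnk As DAO.TableDef
    Dim i As Long

    Set feDb = CurrentDb

    ' 1. Disconnect: drop every linked table from the FE.
    For i = feDb.TableDefs.Count - 1 To 0 Step -1
        If Len(feDb.TableDefs(i).Connect) > 0 Then
            feDb.TableDefs.Delete feDb.TableDefs(i).Name
        End If
    Next i

    ' 2. Open a recordset of the local configuration table, in change order.
    Set rs = feDb.OpenRecordset( _
        "SELECT ChangeSQL FROM tblConfig ORDER BY ChangeID")

    ' 3. Apply every change from first to last, needed or not. Once the
    '    change code is stable the only error is "already made"; ignore it.
    Set beDb = DBEngine.OpenDatabase(bePath)
    On Error Resume Next
    Do Until rs.EOF
        beDb.Execute rs!ChangeSQL, dbFailOnError
        Err.Clear                       ' change already exists: harmless
        rs.MoveNext
    Loop
    On Error GoTo 0
    rs.Close

    ' 4. Re-link every (non-system) BE table back into the FE.
    For Each td In beDb.TableDefs
        If Left$(td.Name, 4) <> "MSys" Then
            Set lnk = feDb.CreateTableDef(td.Name)
            lnk.Connect = ";DATABASE=" & bePath
            lnk.SourceTableName = td.Name
            feDb.TableDefs.Append lnk
        End If
    Next td
    beDb.Close
End Sub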
----------
And now for testing, which is a very important part and best done before the end user sees it.
Take any back end, at any version level, and run the new FE against it. In fact, test any old FE against any BE and it should survive.
The requirement is that, even locally before shipment, the FE should survive without the destruction of any BE data.
----------
In concept, then, this is about removing all external dependencies from the FE.
The FE should run. The FE should run without external dependencies. The FE should run without references; it should be late bound. The FE should be late bound on tables and fields which the FE cannot predict. The FE should run even if the tables and fields do not exist.
If the FE is dependent on tables and fields, it should create them. But the FE should not change anything which another version of the FE requires.
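A minimal sketch of that ‘create what you depend on’ idea; the helper and all names are hypothetical:

' Minimal sketch: the FE checks for a table it depends on and creates
' it if absent; it never alters what it finds already there.
Public Sub EnsureTable(db As DAO.Database, ByVal tblName As String, _
                       ByVal createSql As String)
    Dim td As DAO.TableDef
    On Error Resume Next
    Set td = db.TableDefs(tblName)      ' errors if the table is missing
    On Error GoTo 0
    If td Is Nothing Then
        db.Execute createSql, dbFailOnError
    End If
End Sub

' Usage, for example:
'   EnsureTable beDb, "tblNotes", _
'       "CREATE TABLE tblNotes (NoteID LONG, NoteText MEMO)"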
----------
A practical solution does not involve jumping to variables at all. It involves thinking about the ongoing process. The ongoing process is about storing the changes required by the FE in a local configuration table in the FE and processing that entire table each and every time the FE starts up. It processes all changes from the start up to the current level of the FE which requires those changes. The current level is the last level because that is all the configuration table needs; no knowledge of future versions is required. Hence, the processing of the configuration table is always from start to end.
The code required to process the changes is also in the FE; where else would it be?
The FE looks after itself up to its own version, which is defined as the last entry in its own configuration table, using the related code, which is also in the FE.
The FE therefore looks after itself.
----------
It is altogether a different bucket of bolts working on ‘databases at a distance’. ‘Databases at a distance’ require self-preservation, and that requires self-containment, which in turn requires isolation from external influences.
And by the way, bollocks to performance issues at startup; you want it to work, not just get a high-speed crash.
Chris.