How to make a large project?

Did you inadvertently leave out a UAT environment when describing your process - so that the users have a chance to review and verify the expected functionality, implement and test urgent fixes, set up a change log, and get sign-off from the users/project owner?

What do you mean by 'UAT environment'?
 
I still don't believe that any database needs to grow month after month, unless that growth is caused by normalisation issues and duplication of processes. Therefore the ever-expanding project could be dealt with differently, without needing to add new forms perpetually.

Additionally, as already mentioned, a project can most likely be split into sections that are really independent of each other.
 
UAT=User Acceptance Testing

You may know it as something else.

For anything professional, it is the final step before releasing the app to production (after the app has passed all the user tests).

The developer should be aware of what those tests are and build the app accordingly. The tests are typically specified from a mixture of the original specification (which might be modified by CRs (Change Requests)) and USs (User Stories). Clearly the developer will do their own tests before releasing to UAT, but since they are the developer they can be blind to some of the potential issues. For example, I mainly navigate with the mouse and rarely use keyboard navigation; I can forget that some users are the other way round and rarely use the mouse. I can be aware and test, but can still miss something.
 
My current specific application is of no importance
The important thing is how to make a project whose number of objects exceeds what a single Access file can contain.

Several posts ago I gave you this answer. Identify, through thorough analysis, the functions that can be segregated. Design your project to perform that segregation into discrete .ACCDB files. Check for the interactions between the distinct parts. PLAN and specify the nature of their interactions, which COULD be two FE files sharing the same BE file, or an external transaction file written by one app and read by another. Recognize EACH / ALL of the cases where the different parts overlap and carefully define their method of overlap.

Nowhere in this brief explanation did I limit the number of Access back-end, side-end, or front-end files that would be used. Your only limit will be the machine or machines used to host this mega-app. You want details? This is YOUR project. YOU have to fill in the blanks.
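As one concrete piece of that plan, a split FE/BE design usually carries a small routine that repoints the front end's linked tables to whichever back-end file that installation should use. A minimal sketch in VBA - the procedure name and the example path are invented for illustration, not from this thread:

```vba
' Repoint every linked table in this front end to the given back-end file.
' strBackEnd is a full path, e.g. "\\server\share\Sales_BE.accdb" (hypothetical).
Public Sub RelinkTables(strBackEnd As String)
    Dim tdf As DAO.TableDef
    For Each tdf In CurrentDb.TableDefs
        ' Only linked tables have a Connect string; local tables are skipped.
        If Len(tdf.Connect) > 0 Then
            tdf.Connect = ";DATABASE=" & strBackEnd
            tdf.RefreshLink
        End If
    Next tdf
End Sub
```

With several BE files, you would call this once per group of tables (filtering on each table's current Connect string) rather than pointing everything at one file.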
 
In essence it becomes, or is similar to, a systems integration project: multiple (departmental) systems that need to exchange data. This may be done through one database with shared tables, with clear rules around which application owns the data in the sense of the right to create, modify, and delete; or through real-time data exchange - i.e. messaging, using a defined standard such as HL7 as used in health apps, where transactional events trigger the exchange (and automated requests); or through supply of data in batch extracts, triggered by events, to an aggregating application supporting reporting and performance-analysis tasks.
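In the batch-extract flavour, the owning app might simply write a dated delimited file to a shared folder for the aggregating app to pick up on its own schedule. A rough VBA sketch - the saved query name and the UNC path here are placeholders, not anything defined in this thread:

```vba
' Export today's transactions as a dated CSV for the downstream application.
Public Sub ExportDailyBatch()
    Dim strFile As String
    strFile = "\\server\exchange\txn_" & Format(Date, "yyyymmdd") & ".csv"
    ' "qryTodaysTransactions" is a placeholder saved query in this app.
    DoCmd.TransferText acExportDelim, , "qryTodaysTransactions", strFile, True
End Sub
```

The date stamp in the file name doubles as a simple audit trail, and the downstream app can process files in name order.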
 

When I worked with the U.S. Navy, we had 18 different computers that exchanged data. When I started, networking wasn't QUITE as advanced as it later became, so we had a daily, weekly, or other periodic "tape cut" where we built a transaction file that would be sent to some agency and we would get back a response to that file PLUS their proposed changes. As time passed, we changed to use SFTP as our transfer method rather than putting sealed tape reels in a Navy truck to schlep them to airplanes for transport to other systems. Networking radically reduced our operational costs because before then, the sailor who drove the trucks was on special duty - classified data courier. Because these tapes also involved personal info, we had requirements that the sailor would be armed or would have an armed guard, involving extra, higher-ranked and thus higher-paid people.

Everyone knew which machine was "authoritative" and deferred to its files as gospel. For instance, my assigned major system was U.S. Navy Reserve Personnel issues for duty station assignment, rank/rate changes, billets, and dependent data. We would send our data to, for instance, DFAS - Defense Finance Accounting System - for any monetary transactions, for which THEY were authoritative. Another department handled training requirements authoritatively.

In line with how that applies to this thread, we rarely had trouble processing what our own people entered. Our problems were with the interface files, the overlapping points of contact, and situations where someone violated procedures leading to transaction rejections. amorosik's problem will be to find HIS points of overlap, closely define what transactions are allowed, and assure that both apps "understand" the rules.
 
Stages of a Contract
Enthusiasm
Disillusionment
Panic
Search for the guilty
Punishment of the innocent
Praise and advancement for the uninvolved

At the end of the day, how good the result is will depend on how good you are.
If you read the postings in this thread and understand them, then that will be beneficial.
 
Having said that, you also have to realise that users are likely to be conditioned by their previous experience and won't necessarily recognise and value all improvements at first sight.

The real problem is that developers understand development but don't understand business practice and the needs of a specific organisation, while users don't understand what they really require.

So developers might deliver a product that meets the specification but doesn't really satisfy the client's real requirements.

And that's why UAT is often a waste of time. Sorry to have a jaundiced view.

It's also how many projects end up in the way @Cotswold just described.

The best way to complete a project is to get a contractor who understands your business and substantially leave them to it. The trouble is finding the right contractor.
 
Yes, of course, this does not mean that I have one super-large project.
But that consideration has no importance for solving the problem posed.
I could actually have the problem to solve, or I could not have it; that has no bearing on solving the problem posed,
which honestly seems to me to be of interest to everyone.
In this sense I do not understand why I see answers that try to belittle the need for a more or less standard technique that allows you to overcome the limits (not very large, I would say) imposed by a single Access file.
It's only of academic interest, because I expect a vanishingly small number of us have ever been involved in projects that could conceivably get big enough to challenge the limits of Access, and no doubt we would have a strategy to deal with it if it became necessary, or more likely well before it got that big. But we wouldn't have projects that expanded every month for no particular reason, which is what I thought you were saying.

It's one thing expanding because the client has additional requirements. You made it sound like a real project was expanding, possibly because of poor design.

Yes, I've been involved in rewriting projects that were poorly written and queries got too big to run. Once they were rewritten in a properly normalised fashion, the problems went away, and more importantly the rewritten project was able to deal with things the client had never considered previously.

And the main reason the project was designed so poorly was because the client said "I want this", and the developer gave them what they asked for, instead of what they really needed.
 

In a very strong sense, you take on the responsibility of telling the customer he's an idiot if you take the job even AFTER he asks for idiotic things. How diplomatically you can do so is up to you.
 
Well, you can give him what looks like what he asked for, but design it in a properly normalised way, so that when the client wants more complications you have a framework that enables the growth.
 
Wow, 138 posts now.

OP, are you ever going to mark this as solved?
I am certainly still trying to understand what the real limits on the objects that can be managed by a single Access file are, because some experiments already performed suggest that they are not so well defined.
Because they are impossible to define. Too many factors are at play.

If someone tells you it's 1k forms, what if all of those forms have one label each? Do you think your computer is going to struggle? Likely not, if you have a high-end computer; but if you have a computer with the typical specs of a 1995 machine, it is very likely to give you problems.

On the other hand, what do you think would be the result of having 1000 open forms that:
- have X web browser controls connected to websites that hog the CPU
- have Y tables loaded with one million records each
- have Z combo box controls with complex nested queries, each grabbing tens of thousands of records
- have N subforms, with multiple visual effects and timers running at the same time
- have a few media players playing 4K videos
- and, for every move of the mouse, play a sound while all forms react to each other?

Now, what do you think the result would be if you tried to operate in that environment on a computer with 256MB of RAM and 3GB of disk space? On the other hand, do you think a carefully crafted setup that can withstand the computing load of serving 100 million users worldwide generating AI videos would even blink at that?

Will you still say that the specifics have no importance for the question posed?

These are the initial steps; then come all the others, related to the techniques that allow the project to be updated without disturbing the user's normal operation.
If you want to know what's going to disturb normal operation, define normal operation.
 

If it is not possible to use the 'number of objects' to understand the technical limit of what can be contained in a single Access file, then other conditions will have to be checked, such as the memory used by that single Access file during operation, or still other conditions.
 
It's only of academic interest ...

Even if it were just an academic interest, the topic would still be of great value to someone who uses Access.
Even a linked list or a bubble sort is a purely academic interest (unless one builds a db server or similar), but they are still of great interest for understanding how things work in certain fields.
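For what it's worth, the bubble sort mentioned above really is small enough to write from memory; here's a throwaway VBA version for a zero-based array of Longs, purely as an illustration of the academic exercise:

```vba
' Sort a zero-based array of Longs in place, smallest first.
Public Sub BubbleSort(arr() As Long)
    Dim i As Long, j As Long, tmp As Long
    For i = UBound(arr) To 1 Step -1
        For j = 0 To i - 1
            If arr(j) > arr(j + 1) Then
                ' Swap adjacent out-of-order elements.
                tmp = arr(j): arr(j) = arr(j + 1): arr(j + 1) = tmp
            End If
        Next j
    Next i
End Sub
```

Each outer pass bubbles the largest remaining value to position i, which is exactly why it's an O(n²) teaching algorithm rather than something you'd use in production.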
 
then it means that other conditions will have to be checked, such as the memory used by that single Access file during operation, or other conditions still
That was what I said way back when.
 
Testing has been mentioned multiple times. Here's what ISTQB, an authority in the subject, says about that:

1. Testing shows the presence, not the absence of defects. Testing can show that defects are present in the test object, but cannot prove that there are no defects (Buxton 1970). Testing reduces the probability of defects remaining undiscovered in the test object, but even if no defects are found, testing cannot prove test object correctness.
2. Exhaustive testing is impossible. Testing everything is not feasible except in trivial cases (Manna 1978). Rather than attempting to test exhaustively, test techniques (see chapter 4), test case prioritization (see section 5.1.5), and risk-based testing (see section 5.2), should be used to focus test efforts.
3. Early testing saves time and money. Defects that are removed early in the process will not cause subsequent defects in derived work products. The cost of quality will be reduced since fewer failures will occur later in the SDLC (Boehm 1981). To find defects early, both static testing (see chapter 3) and dynamic testing (see chapter 4) should be started as early as possible.
4. Defects cluster together. A small number of system components usually contain most of the defects discovered or are responsible for most of the operational failures (Enders 1975). This phenomenon is an illustration of the Pareto principle. Predicted defect clusters, and actual defect clusters observed during testing or in operation, are an important input for risk-based testing (see section 5.2).
5. Tests wear out. If the same tests are repeated many times, they become increasingly ineffective in detecting new defects (Beizer 1990). To overcome this effect, existing tests and test data may need to be modified, and new tests may need to be written. However, in some cases, repeating the same tests can have a beneficial outcome, e.g., in automated regression testing (see section 2.2.3).
6. Testing is context dependent. There is no single universally applicable approach to testing. Testing is done differently in different contexts (Kaner 2011).
7. Absence-of-defects fallacy. It is a fallacy (i.e., a misconception) to expect that software verification will ensure the success of a system. Thoroughly testing all the specified requirements and fixing all the defects found could still produce a system that does not fulfill the users’ needs and expectations, that does not help in achieving the customer’s business goals, and that is inferior compared to other competing systems. In addition to verification, validation should also be carried out (Boehm 1981).

🤷‍♂️
 
@Edgar_ ... I see you are quoting Barry Boehm among your other sources. I used to have a couple of his books on project management. Sadly, I lost some of those excellent reference works in 2005 in the aftermath of Hurricane Katrina's flooding. However, I remember that Boehm had a LOT of good things to say about testing as well as on pre-planning and about careful selection of project members who were familiar with the project's environment - as opposed to dragging someone "off the street" and expecting rapid productivity. He was big on "learning curves" as well as testing, as I recall.
 
No. 7 is what I said. The design commissioners and specifiers may not understand the business needs well enough, and may not understand the development possibilities available to produce an optimal (or near-optimal) solution. The developers may not understand the real requirements.
 
But this is completely different to choosing an inefficient sorting algorithm.

I don't believe it is of any use to consider the question "what will we do if the problem is too complicated for MS Access". We are doing that continually in a microcosm. Occasionally we get queries that are too large to be processed, and we find a way round it empirically.


You seem to be insistent upon arguing about the difference between 2 infinities as it were. I have never had a time when Access has said "this project is now too big". If that happens I will deal with it, probably by dividing the project into two or more parts. Until that happens it's academic, and not really interesting.
 
@amorosik - This post is a serious request to review what level of help you have received to date, based on nearly 150 posts on a technical question.

From this forum, you have gotten answers from:

people with an aggregate of at least 200 years of programming experience, probably more than 300 years worth
people with public sector project experience, government project experience, and private business experience
people who hold advanced degrees and with specialty certificates in various topics
people who have started and maintained their own personal businesses
people who have held government security clearances, implying their level of expertise in their field.

But your responses suggest that the above level of resources was inadequate for your needs.

You have continually claimed that you should be able to get specific answers without defining the specifics of the problem. We have disagreed with you and explained why we did so. Members of this forum have tried to help you with advice commensurate with your description of your question. You have continually responded to our answers with a "yes, but..." approach, signifying that you find our advice inadequate, or that you believe we must have forgotten something. While you deny it, MANY of us feel as though your responses are evasive.

Given the kind of diversity and level of experience of our responding members, do you perhaps ever so faintly begin to think that MAYBE the problem is within YOU? Do you think that MAYBE you really don't understand the level of question you are asking? This is not meant to be taken as an argumentum ad auctoritatem discussion nor an argumentum ad hominem attack on you.

Please explain why the responses you got so far are not good enough for you. If this forum cannot answer your questions then the only other way we can help you is to direct you to resources you will need to get those answers... if we ourselves know the answers.
 
