How to test a program before releasing a new version?

Pat Hartman

Super Moderator
Staff member
The ONLY value that most testing products add is that they take good statistics over repeated runs and make pretty reports.
They are also good for regression testing. Every time you make a change, you run the test deck again. If you add features that need testing, you add tests to the test deck.
Of course, a project certainly needs to be tested adequately, especially if it is a large one
The size of the product is irrelevant. Even the smallest of applications must be tested.

For small projects, testing generally falls to the developer. It is very difficult for a developer to properly test something he wrote. He knows how it is supposed to work and so tends to always do things the right way. One example: the developer makes the mistake of putting validation in the control's BeforeUpdate event rather than in the form's BeforeUpdate event. You have two date fields and the second must be > the first. Normally, the user will just tab through the controls, so date 1 gets entered, then date 2 gets entered, and the code to compare the two dates runs. But what happens if the user enters date 2 first? What happens if he goes back and changes date 1 later on? Because of the incorrect placement of the validation code, those errors will allow bad data to slip through. The developer is unlikely to ever discover this error.
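For illustration, a minimal sketch of the form-level placement might look like this (the control names Date1 and Date2 are only assumed for the example):

Code:
' Form-level validation: runs when the record is saved,
' no matter in which order the controls were filled in.
' Control names Date1/Date2 are illustrative.
Private Sub Form_BeforeUpdate(Cancel As Integer)
    If IsNull(Me.Date1) Or IsNull(Me.Date2) Then
        MsgBox "Both dates are required.", vbExclamation
        Cancel = True
    ElseIf Me.Date2 <= Me.Date1 Then
        MsgBox "The second date must be later than the first date.", vbExclamation
        Cancel = True
    End If
End Sub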

When I test forms created by others, I always start by entering data backwards, and I try to save after each new piece is entered to see whether I get an error or whether I can save incomplete records.

All applications need to have user testing before being released to the general population. And if there are multiple developers on the team, it is best for them to swap testing duties and test for each other.

One thing that I do when testing applications is to write queries that look for bad or missing data. Those are the final step. Even the test BE should never have bad data.
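As a rough sketch of such a check (the table and field names tblBooking, Date1, Date2 are made up for the example), a routine like this can be run after every test pass:

Code:
' Data check after testing: list records that violate the date rule
' or have missing dates. Table/field names are illustrative.
Public Sub CheckForBadDates()
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT ID, Date1, Date2 FROM tblBooking " & _
        "WHERE Date2 <= Date1 OR Date1 IS NULL OR Date2 IS NULL")
    Do Until rs.EOF
        Debug.Print "Bad record:", rs!ID, rs!Date1, rs!Date2
        rs.MoveNext
    Loop
    rs.Close
End Sub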
 

Josef P.

Well-known member
I think it is much more worth your while to put effort into code-level tests such as unit tests or integration tests.
If you follow the single-responsibility principle (SRP) when writing code, you should be able to cover the most important parts of an application with these "simple" tests.

I personally see this as an advantage of TDD: you have to write code that is testable. SRP then usually follows as a side effect. :)

I'm trying to figure out how this thing works :unsure:
Let's work through a small example. Pick a function that you want to test and give some examples of what it should return for given parameters. Then we design a few tests together for this function.

You have two date fields and the second must be > the first. Normally, the user will just tab through the controls, so date 1 gets entered, then date 2 gets entered, and the code to compare the two dates runs. But what happens if the user enters date 2 first? What happens if he goes back and changes date 1 later on? Because of the incorrect placement of the validation code, those errors will allow bad data to slip through. The developer is unlikely to ever discover this error.
This is a good example of something a developer won't cover with tests if she/he doesn't think of the possibility that the input can happen in this order.
And if he does think of this input order, he no longer needs to test for it, because then he places the check correctly in the first place.
If this is a data rule, it should hopefully also be set in the table => simple to test.
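For illustration, a record-level validation rule in the table enforces the rule regardless of any form, and a test then only has to try to save an invalid record and expect the rejection (the table and field names tblBooking, Date1, Date2 are made up):

Code:
' One-time setup (or simply done in table design view):
' record-level validation rule on the table. Names are illustrative.
Public Sub SetDateRule()
    With CurrentDb.TableDefs("tblBooking")
        .ValidationRule = "[Date2] > [Date1]"
        .ValidationText = "The second date must be later than the first date."
    End With
End Sub

' Simple check: inserting an invalid record must be rejected with a validation error.
Public Sub TestDateRule()
    On Error GoTo ExpectedError
    CurrentDb.Execute _
        "INSERT INTO tblBooking (Date1, Date2) VALUES (#2024-01-02#, #2024-01-01#)", _
        dbFailOnError
    Debug.Print "FAIL: the invalid record was accepted"
    Exit Sub
ExpectedError:
    Debug.Print "PASS: rejected with error " & Err.Number & " - " & Err.Description
End Sub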
 

amorosik

Member
Let's work through a small example. Pick a function that you want to test and give some examples of what it should return for given parameters. Then we design a few tests together for this function.

Ok, the function is:

Code:
Public Function Debug_print_file(ByVal messaggio As String, Optional messaggio2 As String)
    On Error Resume Next
    nome_file_log = Application.CurrentProject.Path & "\log_tempi.txt"
    num_file = FreeFile()
    Open nome_file_log For Append As num_file
    Print #num_file, messaggio & " " & messaggio2 & Chr(13)
    Close num_file
End Function

One or two strings containing a message to be written to the file are provided as parameters.
How to test it?
 

ebs17

Well-known member
Code:
On Error Resume Next
If you use this as standard, you will ALWAYS have huge problems. Think about why!
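A quick, made-up illustration of why (the folder below deliberately does not exist): every statement after the failed Open is swallowed as well, so the log entry is simply lost and nothing ever tells you.

Code:
' Hypothetical demonstration - the path does not exist, so Open fails.
Public Sub Demo_SilentFailure()
    On Error Resume Next
    Dim FF As Long
    FF = FreeFile()
    Open "Q:\no_such_folder\log.txt" For Append As FF  ' fails, error suppressed
    Print #FF, "important message"                     ' writes nothing, error suppressed
    Close FF                                           ' suppressed as well
    Debug.Print "Finished without any visible error - but nothing was written."
End Sub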

In the spirit of the above, I would use a method that writes something to a text file.
Code:
Public Sub Append2TextFile(ByVal Content As String, Optional ByVal TextFile As Variant)
    Dim FF As Long
    If IsMissing(TextFile) Then TextFile = CurrentProject.Path & "\Log.txt"
    FF = FreeFile()
    Open TextFile For Append As FF
    Print #FF, Content
    Close #FF
End Sub
Code:
' call
Append2TextFile messaggio, CurrentProject.Path & "\log_tempi.txt"
Append2TextFile messaggio2, CurrentProject.Path & "\log_tempi.txt"
 

amorosik

Member
No, it's definitely not the common way to handle errors

The example you gave only calls the Append2TextFile routine.
It does not perform any check on whether the routine actually produced the correct result.
 

ebs17

Well-known member
Before testing, you should think about what possible errors could occur. The method appends content to the content of an existing file. If the file does not exist, it will be created automatically. This can fail if you do not have write access to the corresponding folder or a connection is temporarily interrupted.
Something like this could be checked within the method, but preferably on the argument before passing. You should definitely have write permissions for CurrentProject.Path.
The second conceivable source of error could be that something cannot be passed as a string (number, True, date, Null, zero-length string).
Finally, you should check whether what you hand over actually ends up in the file => read the file contents back, compare the last section with what was handed over, and watch out for duplication.

You can write all of this yourself.
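A minimal manual check along these lines could look like the sketch below (no framework, just write a marker and read the file back; it reuses Append2TextFile from above):

Code:
' Rough sketch: append a unique marker line, then read the file back
' and verify the marker actually arrived.
Public Sub Check_Append2TextFile()
    Dim TestFile As String
    Dim Marker As String
    Dim FF As Long
    Dim Content As String

    TestFile = CurrentProject.Path & "\Log_Test.txt"
    Marker = "Test entry " & Format(Now, "yyyy-mm-dd hh:nn:ss")

    Append2TextFile Marker, TestFile

    FF = FreeFile()
    Open TestFile For Input As FF
    Content = Input$(LOF(FF), FF)
    Close FF

    If InStr(Content, Marker) > 0 Then
        Debug.Print "OK - marker found in " & TestFile
    Else
        Debug.Print "FAILED - marker not found in " & TestFile
    End If
End Sub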
 

amorosik

Member
Yes, of course; so following this example, to accurately test a simple procedure of about ten lines you would have to write considerably more code than the procedure itself
This is a possibility
But I think it is impractical because of the immense effort required to write the test code compared with the program code
I thought there were specific tools capable of speeding things up, but above all I thought this was normal practice for large-scale projects
But instead it seems to me that everyone does their own tests, manually, and then releases the code into production.
 

theDBguy

I’m here to help
Staff member
But instead it seems to me that everyone does their own tests, manually, and then releases the code into production.
Actually, I think what was said was that the developer does "some" tests and then releases the code to the TEST environment first - not "production." No?
 

sonic8

AWF VIP
But I think it is impractical because of the immense effort required to write the test code compared with the program code
It really depends.
Yes, it appears to be (a lot of) additional work and sometimes it actually is.
However, it can also be a lot of effort to test a complex routine manually over and over again while working on it.
If you compare the effort of writing test code once and then being able to re-run the test in an instant, again and again until the requirements change, with the effort of manual testing each and every time you make a change to the code, it may turn out that the additional work of writing the test is much less effort than manual testing.

Also, be aware that testing code which depends on external resources, such as a remote service, a database, or the file system, can indeed be pretty complex. Hardcore TDD enthusiasts will argue that you should be able to replace these external resources with mock/fake objects to simplify testability - I do not (fully) agree with that.
Anyway, testing a simple function that does a calculation and returns the result is much easier. The effort for testing this kind of code is next to nothing, particularly if you already wrote several test calls in the Immediate Window while writing the function. You can then just copy those lines from the Immediate Window into a test without any additional effort.
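As a sketch of how cheap that is (the function NetPrice below is made up), the Immediate Window calls turn into a handful of asserts:

Code:
' Hypothetical pure function ...
Public Function NetPrice(ByVal GrossPrice As Currency, ByVal VatRate As Double) As Currency
    NetPrice = GrossPrice / (1 + VatRate)
End Function

' ... and the former Immediate Window checks, moved into a re-runnable test routine.
Public Sub NetPrice_Tests()
    Debug.Assert NetPrice(120, 0.2) = 100
    Debug.Assert NetPrice(0, 0.2) = 0
    Debug.Assert NetPrice(107, 0.07) = 100
    Debug.Print "NetPrice_Tests passed"
End Sub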
 

Josef P.

Well-known member
Ok, the function is:

Code:
Public Function Debug_print_file(ByVal messaggio As String, Optional messaggio2 As String)
    On Error Resume Next
    nome_file_log = Application.CurrentProject.Path & "\log_tempi.txt"
    num_file = FreeFile()
    Open nome_file_log For Append As num_file
    Print #num_file, messaggio & " " & messaggio2 & Chr(13)
    Close num_file
End Function

One or two strings containing a message to be written to the file are provided as parameters.
How to test it?
Note: Before writing a test for this, Option Explicit should be added to the CodeModule first. ;)

What do you want to test from this function?
Or asked another way: what needs to be tested in this function?

Anyway, here is an example of a test class:
Code:
Option Compare Text
Option Explicit

'AccUnit:TestClass

' Call from immediate window:
'    TestSuite.AddByClassName("Debug_print_file_Tests").Run

'--------------------------------------------------------------------
' Test Preparation / Cleanup
'--------------------------------------------------------------------
Public Sub Setup()
    RemoveTextFileIfExists
End Sub

Public Sub TearDown()
    RemoveTextFileIfExists
End Sub

'--------------------------------------------------------------------
' Tests
'--------------------------------------------------------------------

'AccUnit:Row("abc", "xyz")
'AccUnit:Row("abc", "")
'AccUnit:Row("", "xyz")
Public Sub DebugPrintFile_NewFile_CompareText(ByVal messaggio As String, messaggio2 As String)
 
    Dim Expected As String
    Dim Actual As String
 
    Debug_print_file messaggio, messaggio2
 
    Expected = messaggio & " " & messaggio2 & Chr(13) & vbNewLine
 
    Actual = GetTextFromFile
 
    ' Assert
    Assert.That Actual, Iz.EqualTo(Expected)
 
End Sub

'AccUnit:Row("abc", "xyz")
'AccUnit:Row("abc", "")
'AccUnit:Row("", "xyz")
Public Sub DebugPrintFile_NewFileAndAppend_CompareText(ByVal messaggio As String, messaggio2 As String)
 
    Dim Expected As String
    Dim Actual As String
 
    Const FirstEntryText As String = "xxx"
 
    ' create 1. entry
    Debug_print_file FirstEntryText
 
    ' create 2. entry
    Debug_print_file messaggio, messaggio2
 
    Expected = FirstEntryText & " " & Chr(13) & vbNewLine & _
               messaggio & " " & messaggio2 & Chr(13) & vbNewLine
 
    Actual = GetTextFromFile
 
    ' Assert
    Assert.That Actual, Iz.EqualTo(Expected)
 
End Sub

'--------------------------------------------------------------------
' Test helper
'--------------------------------------------------------------------

Private Sub RemoveTextFileIfExists()
    Dim FileToCheck As String
    FileToCheck = TextFilePath
    If Len(VBA.Dir(FileToCheck)) > 0 Then
        Kill FileToCheck
    End If
End Sub

Private Property Get TextFilePath() As String
    TextFilePath = Application.CurrentProject.Path & "\log_tempi.txt"
End Property

Private Function GetTextFromFile() As String

    Dim FileNum As Long
    Dim TextFromFile As String
 
    FileNum = FreeFile()
    Open TextFilePath For Input As FileNum
    TextFromFile = Input$(LOF(FileNum), FileNum)
    Close FileNum

    GetTextFromFile = TextFromFile

End Function

The example is for AccUnit, but it can certainly be used in a similar way with Philipp's accessUnit (fork).
Unfortunately, the example does little to show the benefits of testing, since nothing can actually go wrong in the procedure.
The only thing that could go wrong is writing to the file due to a lack of permissions, and that is unfortunately not testable.
 

Attachments

  • TestExample.zip
    25 KB · Views: 54

Pat Hartman

Super Moderator
Staff member
The automated test platforms are designed to test a fully contained procedure. You provide the inputs, the platform runs the code and displays the outputs. You can create all the input variations you want and you can run the tests every time you change the code. If the output ends up in a table, you check the table after each test.

Testing interactive processes like a form is much more complex. For example, they might be able to map an input file to form fields but I doubt they could support data entry order as in the example I mentioned earlier.

You need to train your people how to test and, as long as you have multiple team members, make them swap and test each other's objects.
 

ebs17

Well-known member
If a piece of software could independently test everything presented to it, it might as well write the application itself.
 

The_Doc_Man

Immoderate Moderator
Staff member
Once you get past the point of realizing that a complex program will require complex testing that can be automated only to a limited degree, you have basically a few paths. Either (a) you write your code so that if an error occurs, you can either handle it or at least make a record of it for later review; or (b) you write a customized complex test driver and framework; or (c) you build a manual test specification and get a team of testers; or (d) you painfully realize that the VERY BEST testers in the world are your clients - because THEY will somehow IMMEDIATELY find weaknesses in your offering, weaknesses you never even IMAGINED to exist, weaknesses you didn't even believe COULD exist.

We've all been there to some degree or another. We have all learned the "law of conservation of inadequacy":

insufficient preparation = inadequate results

There is no short-cut to quality. But there are MANY detours that lead to undesirable results. And, being experienced, I know many of them first-hand.
 

jdraw

Super Moderator
Staff member
I agree, Doc. There has been another method of finding security weaknesses. ;)
Get a few students and tell them to exercise the system after advising them that nobody can break this - it's solid.
 

Josef P.

Well-known member
Ehhh??? Then we could also just go home, sit on the sofa and turn on the TV
😅
Counter-question: how do you want to check, with an automated test on your PC (under your user account), whether an error occurs for some other user because access rights are missing in the file system?
 

The_Doc_Man

Immoderate Moderator
Staff member
I agree, Doc. There has been another method of finding security weaknesses. ;)
Get a few students and tell them to exercise the system after advising them that nobody can break this - it's solid.

As a matter of fact, that was how OpenVMS was tested for vulnerability to network incursions. The most recent part of that saga: HP owned the O/S, so they went to the hacker's convention known as DefCon IV and set up a fully functional, fresh-out-of-the-box OpenVMS, probably version 8.3 or 8.4 at the time, and said, "Nobody can break in through the network." They were right, by the way. You CAN crack an OpenVMS but not from the outside. Has to be done by an insider. But that is how OpenVMS was debugged many times. Tell a hacker "You can't crack it." Eventually that became true.
 

Josef P.

Well-known member
I think one should not mix the testing of interfaces etc. against their specifications with security testing.
Testing the interfaces of classes/procedures can be done relatively well with automated tests.
The decisive factor here is that you know the requirements.

Let's take the Access.BuildCriteria function as an example:
Code:
Option Compare Database
Option Explicit

'AccUnit:TestClass

' Call from immediate window:
'    TestSuite.AddByClassName("BuildCriteriaTests").Run

'AccUnit:Row("T", "=abc", "T=""abc""")
'AccUnit:Row("T", "abc", "T=""abc""")
'AccUnit:Row("T", "ab'c", "T=""ab'c""")
'AccUnit:Row("T", "abc or xyz", "T=""abc"" Or T=""xyz""")
Public Sub BuildCriteria_dbText_CheckStringResult(ByVal FieldName As String, ByVal Expression As String, ByVal Expected As String)

    Dim Actual As String
    Actual = Access.BuildCriteria(FieldName, dbText, Expression)
    Assert.That Actual, Iz.EqualTo(Expected)

End Sub
The Row tests describe/test the expected behavior.

[OT]
What would be the expected value when passing ab"c as an expression?
I would then expect T="ab""c" as the result. Unfortunately, it is not so. ;)

'AccUnit:Row("T", "ab""c", "T=""ab""""c""")
=>
BuildCriteriaTests.BuildCriteria_dbText_CheckStringResult.5 Failure *** actual is greather then expected
Expected: T="ab""c"
but was: T="ab"c"
 

Auntiejack56

Registered User.
All great ideas for testing. As a former release manager, I note that the OP mentioned 'the moment of release', and confined the question to functional verification.
So if I may, I'd add the following: consider running something like Access Detective to compare your new version against Prod. And then go thru the difference reports line by line to make sure there is no odd code in your changes, and that all expected changes are there. There are many reasons why this can inadvertently happen, most particularly if you have a team of Access developers all working multiple change requests for a single release.
Jack
 

The_Doc_Man

Immoderate Moderator
Staff member
All great ideas for testing. As a former release manager, I note that the OP mentioned 'the moment of release', and confined the question to functional verification.
So if I may, I'd add the following: consider running something like Access Detective to compare your new version against Prod. And then go thru the difference reports line by line to make sure there is no odd code in your changes, and that all expected changes are there. There are many reasons why this can inadvertently happen, most particularly if you have a team of Access developers all working multiple change requests for a single release.
Jack
Which implies carefully keeping older versions, therefore making this path a form of change-control or version-control or configuration control.

This entire discussion has centered around using large-project methods to plan for testing, manage configurations, document project goals, and various strategies to achieve project stability in a living environment. Which is about what many of us have been saying all along.
 
