1. - Introduction.
1.1 - Motivation for this “Answer”.
The details “20 users”, “max 5 concurrent users”, and a sluggish network are a red flag: the application is reaching the workable limit of MS Access. I have seen a lot of databases reach this limit and generate all sorts of problems. I have seen my own databases reach this limit, and had to take action.
IMHO, the fact that your application is using a file-based database is the main culprit. End of story. In short, I am going to end up advising you to use a server-based database.
But . . . , awaiting such a conversion, what can be done to keep this application afloat?
Warning upfront: you are not going to like what I will be suggesting; there are always drawbacks. But I have tested and used these approaches myself. The main goal is to keep the stress on the MS Access file as low as possible.
1.2 - Assumptions.
I do not have enough information to limit my answer to your exact situation, so I am going to make assumptions and describe several approaches. I would prefer to put each approach in a separate “answer” to make commenting on each one easier. (Unfortunately, I do not know how to get that done.)
As an example, I am going to talk about a database with tables like “Products_T, ClientOrders_T, OrderLines_T, ClientInfo_T, Addresses_T, ZipCodes_T, AccountingCodes_T, DeliveryHeader_T, DeliveryLines_T”.
2. - Answers / solutions / workarounds / approaches / techniques.
2.1 - Year by year.
Let me assume the application started on April 10th, 2020. Take a copy of the central MS Access database and rename it to “huge_database_yr2020”. In this database you are going to delete everything that was not created or edited in the year 2020, or that has substantive relationships with year 2020 data, e.g. the deliveries in 2021 from orders issued in 2020.
Likewise, make another copy renamed to “huge_database_yr2021”, and delete everything not created or edited in 2021, or having relationships with year 2021 data. Do the same for the years 2022 and 2023.
For “huge_database_yr2024” I would make some changes to the above scheme. For instance, keep client information for at least 2 years in the database.
Also, do not forget to “Compact and Repair”. Et voilà, the database that the 5 concurrent users edit has become much smaller.
Now your application must ask which year a user wants to open. The year 2024 can be edited; the other years are read-only. Read-only can be programmed into your application, or you can bluntly set the database file properties to read-only using Windows Explorer.
Do you periodically need all the information in one database for reporting? Okay, copy the information in the separate databases into a new “huge_database_allyears”. (You will get some errors about duplicates, but you are able to figure out a solution for this yourself.)
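The year-by-year deletion above can be sketched as follows. This is a minimal Python sketch, assuming hypothetical column names (“CreatedAt”, “DeliveredAt”) that are not in your schema, and Access-style #...# date literals; the exception rules for related records (e.g. keeping the 2021 deliveries of 2020 orders) would need extra WHERE conditions and are left out for brevity.

```python
from datetime import date

# Build the DELETE statement that strips everything NOT belonging to
# `year` from a COPY of the database. Never run this on the central file!
# Table and column names here are assumptions, not your actual schema.
def archive_delete_sql(table: str, date_column: str, year: int) -> str:
    start = date(year, 1, 1).isoformat()
    end = date(year + 1, 1, 1).isoformat()
    # Access-style #...# date literals; exceptions for related records
    # (e.g. 2021 deliveries of 2020 orders) would need extra conditions.
    return (
        f"DELETE FROM {table} "
        f"WHERE {date_column} < #{start}# OR {date_column} >= #{end}#"
    )

# One statement per table that carries a creation/edit date:
for tbl, col in [("ClientOrders_T", "CreatedAt"),
                 ("DeliveryHeader_T", "DeliveredAt")]:
    print(archive_delete_sql(tbl, col, 2020))
```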
2.2.A - Chopping the database into pieces.
With my example described in paragraph 1.2, I am going to split things up in 4 databases:
A = Products_T, ZipCodes_T, AccountingCodes_T => I assume these do not change a lot, and can be seen as “read only”.
B = ClientInfo_T, Addresses_T => depends on how many clients place multiple orders; perhaps these data are maintained by a separate, specific group of users.
C = ClientOrders_T, OrderLines_T
D = DeliveryHeader_T, DeliveryLines_T
The database that the 5 concurrent users edit has gotten smaller, but not as much as in 2.1.
But the combined databases still contain all the data for all years.
Um . . . opening a client order with its lines (C) automatically means also opening the products (A) and the addresses (B)?
2.2.B - Chopping the database into pieces.
Um . . . opening a client order with its lines (C), automatically means also opening the products (A) and the addresses (B) ?
Okay, we are going to make it uglier. Database C will get chopped-down versions of the other tables: only the most necessary information, and only for the records effectively used in a relationship.
C = ClientOrders_T, OrderLines_T, Products_someinfo_T, Client_someinfo_T
Frankly, I have used this in combination with 2.1, in a case where database (A) was really huge. The combined solution made the backup procedures more tolerable.
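Building such a chopped-down table can be sketched with a SELECT ... INTO statement that copies only the needed columns, and only the rows effectively referenced. A minimal Python sketch that generates the SQL, assuming hypothetical column names:

```python
# Sketch of 2.2.B: create a "someinfo" lookup table holding only the
# columns and rows that database C effectively references.
# All column names below are assumptions; adjust them to your own schema.
def someinfo_sql(source: str, target: str, columns: list,
                 key: str, referencing_table: str) -> str:
    cols = ", ".join(columns)
    return (
        f"SELECT {cols} INTO {target} FROM {source} "
        f"WHERE {key} IN (SELECT {key} FROM {referencing_table})"
    )

print(someinfo_sql("Products_T", "Products_someinfo_T",
                   ["ProductId", "ProductName", "UnitPrice"],
                   "ProductId", "OrderLines_T"))
```

When the source table lives in another .accdb file, you can link that table into database C first, or point the query at the external file; either way, the statement above stays the same shape.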
2.3.A - Caching some data.
Since we have already divided the database into several files, let us keep a “master” of databases (A) and (B) on the server, and regularly copy these files to the local disk of each user.
You would think that the tables “Products_someinfo_T, Client_someinfo_T” could then disappear, and the needed information could be retrieved from the local disk.
Um . . . my experience is that this does not help much.
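For completeness, the refresh step itself can be sketched as follows: a minimal Python sketch, with hypothetical file paths, that only copies the master down when the server file is newer than the local copy.

```python
import shutil
from pathlib import Path

# Hypothetical refresh step for 2.3.A: copy the server master of a
# read-mostly database (A or B) to the user's local disk, but only
# when the local copy is missing or older than the server file.
def refresh_local_cache(server_file: Path, local_file: Path) -> bool:
    """Returns True when a fresh copy was made, False when up to date."""
    if (local_file.exists()
            and local_file.stat().st_mtime >= server_file.stat().st_mtime):
        return False  # local copy is already up to date
    local_file.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(server_file, local_file)  # copy2 preserves the timestamp
    return True
```

Because copy2 preserves the server file's timestamp, the next call is a cheap no-op until the master actually changes.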
2.3.B - Caching most of the data.
Taking it one step further, we also “cache” databases (C) and (D)! Hence the quotation marks around “cache”.
We can run the INSERT and UPDATE SQL statements on the local cache file, and store a copy of the statements somewhere, so that we can also run them on the central database, say once every 15 minutes.
The central database still stores all information for all years. But manipulation of this central database has been reduced to a bare minimum.
Hey, this is going to create duplicate order numbers!
** Bwah, you define a number range for each user.
** After each insertion, they must “sync” with the central database.
I have used this approach for many years! Okay, there are still some ifs and buts, but elaborating on this would take us too far today.
The huge advantage is that all read-only actions happen on the local disk and do not affect the central database at all.
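A minimal sketch of this statement journal, using SQLite as a stand-in for the Access file (the class and file names are my own assumptions): every INSERT/UPDATE runs on the local cache immediately and is appended to a journal file, and `replay_journal` is what the 15-minute job would run against the central database.

```python
import sqlite3  # stand-in for the Access file; the idea is the same
from pathlib import Path

class JournalingConnection:
    """Sketch of 2.3.B: apply each statement locally, journal it."""
    def __init__(self, local_db: str, journal_path: Path):
        self.conn = sqlite3.connect(local_db)
        self.journal = journal_path

    def execute(self, sql: str) -> None:
        self.conn.execute(sql)       # 1. apply to the local cache now
        self.conn.commit()
        with self.journal.open("a") as f:
            f.write(sql + ";\n")     # 2. remember it for the central DB

def replay_journal(central_db: str, journal_path: Path) -> int:
    """Run the journaled statements against the central database
    (e.g. every 15 minutes), then empty the journal.
    Naive split on ';' -- breaks on semicolons inside string literals."""
    conn = sqlite3.connect(central_db)
    stmts = [s.strip() for s in journal_path.read_text().split(";") if s.strip()]
    for sql in stmts:
        conn.execute(sql)
    conn.commit()
    journal_path.write_text("")      # journal has been applied
    return len(stmts)
```

The per-user number ranges mentioned above fit in naturally: each client generates its own order numbers from its assigned range, so replaying the journal cannot collide with another user's inserts.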
I warned upfront that there would be some ugly techniques that no one likes!
2.4 - Microsoft Access Replication.
I am sorry, I have no experience with this.
2.5 - Using a custom “MyOwnLockFile”.
I do not like this one at all; it is awful. But I have used it. After all, desperate times call for desperate measures.
I am going to customize the application so that only 2 or 3 users can access the database simultaneously. To that end, wherever the application starts a connection, I am going to perform the following procedure:
if FileExists("path/MyOwnLockFile.txt") then
    read the file into memory (see the example contents a bit further down)
    close the file handle
else
    prepare the contents of a new file in memory
end if

if user1 = "" then
    user1 = "1 / green / 20240926_131501"
elseif user2 = "" then
    user2 = "2 / green / 20240926_131501"
elseif user3 = "" then
    user3 = "3 / green / 20240926_131501"
else
    show message "I am sorry, you have to wait for other users to quit the application"
    exit function
end if

write the strings header, user1, user2, user3 to "path/MyOwnLockFile.txt"

e.g. the file contents:

UserNbr / UserName / StartedConnectionAt
1 / brown / 20240926_131126
2 /
3 / yellow / 20240926_131129
And of course, wherever the application terminates a connection, make sure your data is blanked out again in this “path/MyOwnLockFile.txt” file.
This will generate a lot of other problems, for instance concurrent access blocking the file “path/MyOwnLockFile.txt” itself . . . But hey, the database is under less pressure.
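For illustration, the same lock-file protocol as a runnable Python sketch. The file layout and names are assumptions, and it deliberately ignores the race condition mentioned above, where two users read and rewrite the file at the same moment.

```python
from datetime import datetime
from pathlib import Path

# Sketch of the "MyOwnLockFile" idea: at most MAX_USERS concurrent
# slots, tracked in a small text file next to the database.
MAX_USERS = 3
HEADER = "UserNbr / UserName / StartedConnectionAt"

def acquire_slot(lock_file: Path, user: str) -> bool:
    """Claim the first free slot; return False when all slots are taken."""
    if lock_file.exists():
        lines = lock_file.read_text().splitlines()
    else:
        lines = [HEADER] + [f"{n} /" for n in range(1, MAX_USERS + 1)]
    for i in range(1, MAX_USERS + 1):
        nbr, _, rest = lines[i].partition("/")
        if rest.strip() == "":  # this slot is free
            stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            lines[i] = f"{nbr.strip()} / {user} / {stamp}"
            lock_file.write_text("\n".join(lines) + "\n")
            return True
    return False  # "you have to wait for other users to quit"

def release_slot(lock_file: Path, user: str) -> None:
    """Blank out this user's slot again when the application closes."""
    lines = lock_file.read_text().splitlines()
    for i in range(1, MAX_USERS + 1):
        if f"/ {user} /" in lines[i]:
            lines[i] = lines[i].split("/")[0].strip() + " /"
    lock_file.write_text("\n".join(lines) + "\n")
```

Note that MS Access itself keeps a similar record of connected users in its own .laccdb lock file; the custom file exists only to enforce a much lower limit than Access allows by default.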
It is probably not a long-term solution, but it buys some time.
3. - Conclusion?
This is a list of options which have served a purpose, but which are really not pleasant to handle. Some options have lasted more than 20 years (2.1). Other options usually end up in a server-based database system, often MS SQL Server (Express), today a hosted version.
For the future I foresee a move to Azure SQL Server (a virtual machine in the cloud), and in the long run even to an (off-premise) hosted MySQL.