replicated & non-replicated tables


groengoen

Registered User.
Local time
Today, 12:38
Joined
Oct 22, 2005
Messages
141
I have an mdb with a front end and a back end. I replicated the back end using Briefcase and then had to add a few more tables. I now have a back end with both replicated and un-replicated tables. When I try to replicate the back end, only the previously replicated tables are replicated, not the new ones. What is the problem?

I see, looking through the posts, that Briefcase is far from ideal anyway, so ideally I should perhaps revert to unreplicated tables, but the instructions involve creating a make-table query for each of the tables and there are 60 of them. Is there a quicker way?

I am having a problem updating a query, and I think it is because the relationship uses two unreplicated tables and a replicated one. Could this be it?

I know there are 3 questions here, but the final one is the one which inspired the others.
 

dfenton

AWF VIP
Local time
Today, 07:38
Joined
May 22, 2007
Messages
469
In your design master, just check the Replicable checkbox in the properties for each of the unreplicated tables, then synch with the other replica.

Options for unreplicating that are easier than what you mentioned are outlined in the Jet Replication Wiki FAQ, Question #10:

http://dfenton.com/DFA/Replication/index.php?title=FAQ
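If you have a lot of tables to flag, the same thing can be done in code instead of through the properties dialog. A hedged DAO sketch (the "Replicable" text property is the documented DAO mechanism; the table name is passed in, and the error handling is only one way to cope with the property already existing):

```vba
' Mark an unreplicated table as replicable via DAO.
' Run this in the design master, then synch as usual.
Public Sub MakeTableReplicable(ByVal strTable As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim prp As DAO.Property

    Set db = CurrentDb()
    Set tdf = db.TableDefs(strTable)

    On Error Resume Next
    ' The property may not exist yet on a never-replicated table.
    Set prp = tdf.CreateProperty("Replicable", dbText, "T")
    tdf.Properties.Append prp
    If Err.Number <> 0 Then
        ' Property already exists; just set it.
        Err.Clear
        tdf.Properties("Replicable") = "T"
    End If
    On Error GoTo 0
End Sub
```

Looping that over TableDefs would cover all 60 tables in one pass.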
 

groengoen

Registered User.
Local time
Today, 12:38
Joined
Oct 22, 2005
Messages
141
Thanks, checking the Replicable box worked, but now my database crashes every time I try to save it, with the message "MS Access has encountered a problem and needs to close." I wonder if this is related to my replication problem?
 

groengoen

Registered User.
Local time
Today, 12:38
Joined
Oct 22, 2005
Messages
141
I have tried saving another form which uses similar tables and I don't get that message so I think I will have to open another thread. It must be unrelated to replication.
 

dfenton

AWF VIP
Local time
Today, 07:38
Joined
May 22, 2007
Messages
469
I thought only your back end was replicated?

Have you compacted your front end? It's important to do so whenever you make structural changes to your back end, replicated or not. Likely you have compiled query plans that have been invalidated by your changes to the back end and that's what's causing the crash. A compact of the front end will cause those compiled query optimizations to be recreated the next time they are run.
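If you end up compacting front ends regularly after schema changes, the compact can be scripted. A hedged VBA sketch (paths are placeholders; note that a database cannot compact itself while open, so this has to run from outside the front end being compacted):

```vba
' Compact a front end from another mdb or a small launcher database.
' CompactDatabase writes a new file, so we swap it into place afterward.
Public Sub CompactFrontEnd()
    Dim strSrc As String, strDst As String
    strSrc = "C:\Apps\MyFrontEnd.mdb"            ' placeholder path
    strDst = "C:\Apps\MyFrontEnd_compacted.mdb"  ' placeholder path

    DBEngine.CompactDatabase strSrc, strDst

    ' Replace the original with the compacted copy.
    Kill strSrc
    Name strDst As strSrc
End Sub
```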
 

groengoen

Registered User.
Local time
Today, 12:38
Joined
Oct 22, 2005
Messages
141
Initially only my back end was replicated but I did a Create Replica and the whole thing became replicated.

One of the options when it crashes is to save and then compact. I have done that and also separately compacted it.

I have got to the stage where it only crashes when I save a single form after I have replaced one query with another, so the problem is probably with that query. I will try to unreplicate the three tables involved and see if that helps. I was beginning to suspect that my disc itself was getting corrupted.
 

dfenton

AWF VIP
Local time
Today, 07:38
Joined
May 22, 2007
Messages
469
Initially only my back end was replicated but I did a Create Replica and the whole thing became replicated.

I don't understand this statement. A "back end" has nothing but data tables in it. A "front end" has forms/reports/modules/etc, and no actual tables, only links to the tables in the back end. Only the back end MDB should ever be replicated, and doing so will have no effect whatsoever on a different MDB file, i.e., the front end.

One of the options when it crashes is to save and then compact. I have done that and also separately compacted it.

I have got to the stage where it only crashes when I save a single form after I have replaced one query with another, so the problem is probably with that query. I will try to unreplicate the tables involved (3) and see if that helps. I was beginning to suspect that my disc itself is getting corrupted

Is your application split or not? If so, the front end should not be replicated at all. You should unreplicate it, compact it and then try again.
 

groengoen

Registered User.
Local time
Today, 12:38
Joined
Oct 22, 2005
Messages
141
My application is split. I will unreplicate and compact the front end as you suggest.
 

groengoen

Registered User.
Local time
Today, 12:38
Joined
Oct 22, 2005
Messages
141
I found a way to unreplicate the front and back ends and that seems to have worked. Thanks for your help
 

pkstormy

Registered User.
Local time
Today, 06:38
Joined
Feb 11, 2008
Messages
64
Avoid MSAccess replication at all costs (whether for the frontend or the backend). I worked with MSAccess replication for several years and it's nothing but headaches. An external user connecting will eventually cause some problem or another, mainly due to the relationships MSAccess tries to maintain by creating ungodly autonumber values (such as -23423242322), which, if the user loses connection, lead to many issues with orphaned records. Not to mention any replicated frontends where code needs to be synchronized. I often had to restore a backup copy to re-create the 'master' and then synchronize it out to the replicated mdbs.

You're much better off having users connect in via terminal server, VPN, or Citrix and then open a frontend off the network drive which has tables linked to the 'same' backend. If this causes problems for slow external connections, consider designing unbound forms and possibly using something like SQL Server for your backend tables (and just link the SQL Server tables into your frontend). This setup works far better than replication.

For frontends, I have users open the mdb via a small vb script I developed, which allows me to have hundreds of users in the same frontend, since the script clones off of a 'source' frontend mdb, adds the user's login, and then automatically launches the cloned copy. You can read more about it here:

http://www.dbforums.com/6274786-post19.html
http://www.fmsinc.com/
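The clone-and-launch idea described above might look roughly like the following. This is only a hedged sketch of the concept, not pkstormy's actual script (see the link for that); paths and the msaccess.exe location are placeholder assumptions:

```vbscript
' Rough sketch: copy a master front end to a per-user local clone,
' then launch Access against the clone so users never share a copy.
Dim fso, wsh, strUser, strSource, strLocal

Set fso = CreateObject("Scripting.FileSystemObject")
Set wsh = CreateObject("WScript.Shell")

strUser = wsh.ExpandEnvironmentStrings("%USERNAME%")
strSource = "\\server\share\FrontEnd.mdb"          ' master copy (placeholder)
strLocal = "C:\Temp\FrontEnd_" & strUser & ".mdb"  ' per-user clone (placeholder)

' Overwrite any stale local copy with a fresh clone of the source.
fso.CopyFile strSource, strLocal, True

' Launch Access against the user's own clone (assumes msaccess.exe on PATH).
wsh.Run """msaccess.exe"" """ & strLocal & """"
```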
 

dfenton

AWF VIP
Local time
Today, 07:38
Joined
May 22, 2007
Messages
469
There is no reason to avoid replication if you use it properly. Too often, though, people try to use it without understanding the basics. This is the source of most problems, i.e., trying to do a direct synch across an unreliable connection.

The only place I consider it essential these days is for laptops that need to edit data when in the field and disconnected from the Internet. In that case, you'd synch direct when back in the office and connected to the wired LAN, so direct would be perfectly safe. In that scenario, there's little that can go wrong and it works very smoothly.

With A2010 and Sharepoint 2010, there's a better way to support disconnected users, and so I'm going to be working on learning how to use A2010 to replace Jet replication.

But for now, Jet replication is a perfectly good solution to the disconnected users problem.

For all other scenarios, I'd agree that Windows Terminal Server is almost always a better solution. It's certainly far easier than re-engineering your app to use a server back end across the Internet.
 

pkstormy

Registered User.
Local time
Today, 06:38
Joined
Feb 11, 2008
Messages
64
There is no reason to avoid replication if you use it properly. Too often, though, people try to use it without understanding the basics. This is the source of most problems, i.e., trying to do a direct synch across an unreliable connection.

The only place I consider it essential these days is for laptops that need to edit data when in the field and disconnected from the Internet. In that case, you'd synch direct when back in the office and connected to the wired LAN, so direct would be perfectly safe. In that scenario, there's little that can go wrong and it works very smoothly.

With A2010 and Sharepoint 2010, there's a better way to support disconnected users, and so I'm going to be working on learning how to use A2010 to replace Jet replication.

But for now, Jet replication is a perfectly good solution to the disconnected users problem.

For all other scenarios, I'd agree that Windows Terminal Server is almost always a better solution. It's certainly far easier than re-engineering your app to use a server back end across the Internet.


I have to totally disagree regarding replication, EVEN if it's used properly! Having been a developer since MSAccess version 1.0 (i.e., 25+ years as an experienced developer), I learned everything about replication and used it for 2 years (in development and support of what originally started out as something developed by the University of Milwaukee). Orphaned records were a constant problem. Totals were rarely accurate (with about 100,000+ records.) This system ran the Midwest Renewable Energy Star program and was a complete disaster (and yes, I fully blame replication!) All the other Energy Star programs I designed in-house using SQL Server and Citrix were always accurate to within less than 1% discrepancy (a requirement by the legislature), where all the dbs contained 5+ million records of Energy Star data (if you ever get a Midwest Energy Star rebate check, it came from the non-replicated system I designed.)

But getting back to replication problems (and I'm recalling 10 years back when I used it)...
1. UNGODLY autonumber values such as -23432432334 which replication uses to do a 'hit and miss' to prevent duplication in a relational structure. A nightmare for any relational structure.
2. ANY loss of connection (or even the smallest hiccup) by an external user while updating data will lead to orphaned records.
3. ANY loss of connection (or even the smallest hiccup) by an external user while synchronizing code will cause problems with the 'nodes' and possibly the master.
4. Replication is a hog when it comes to trying to synchronize thousands of records (or code) and hiccups are frequent regardless of good connections. (I wouldn't even trust it to synchronize a dozen non-relational records.) Users with fast internet connections were constantly complaining about hiccups and inaccurate data during synchronizing. I was constantly fixing problems with destroyed replicated mdbs for users.
5. If the 'master' becomes corrupt in any way (which happens often), a restore of the NON-replicated mdb will need to be done and re-creation of the master and nodes.

It was actually the use of replication and the problems it presented that led to the downfall of the University of Milwaukee running the Midwest Renewable Energy Star program. I was then tasked to replace the replication scheme and develop a more reliable and accurate system for tracking the Renewable Energy program (since the errors in totals stemming from the replication system were well above 10%).

I don't mean to criticize your comments, dfenton, but saying someone simply needs to understand the basics of replication, or that it runs smoothly and non-problematically, is a vast understatement. Even understanding the basics (and advanced features, as I did), it's a disaster of a product and one of MSAccess's worst features.

I strongly, strongly discourage its use for anything! There are much, much better ways (even for offline mdb use, such as exporting/importing data, which I did for offline mdbs.) I shudder at the thought that there are any critical data systems which might use replication.

Re-engineering an application to use a server backend (such as SQL Server) takes less than 5 minutes, since all you need to do is run the MSAccess Upsizing Wizard to SQL Server, which literally takes less than 5 minutes and has always worked flawlessly for me (I do it quite often.) Although I admit you should learn a few basic things about SQL Server, linking in SQL Server tables (versus linking in MSAccess tables) is easily done (the only difference is creating an ODBC connection for the SQL Server linked tables, which the wizard will do automatically). There's no need to re-write/re-engineer any vba code into stored procedures, triggers, or views. I think a lot of developers misunderstand SQL Server and think that all the vba coding must then be re-designed to use views or stored procedures, which is not the case. Linked SQL Server tables work great without revising any vba coding (I won't get into all the benefits of using a server backend.)
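For anyone who wants to create the links in code rather than through the Upsizing Wizard or the Linked Table Manager, a hedged DAO sketch (server, database, and table names are placeholders):

```vba
' Link a SQL Server table into an Access front end via ODBC (DAO).
' The wizard creates equivalent links automatically; this shows the
' same thing done by hand.
Public Sub LinkSqlServerTable()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb()
    Set tdf = db.CreateTableDef("tblCustomers")   ' local link name (placeholder)
    tdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
                  "DATABASE=MyDb;Trusted_Connection=Yes"
    tdf.SourceTableName = "dbo.tblCustomers"      ' server-side name (placeholder)
    db.TableDefs.Append tdf
End Sub
```

Existing bound forms and queries then work against the linked table just as they did against a linked mdb table.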
 

boblarson

Smeghead
Local time
Today, 04:38
Joined
Jan 12, 2001
Messages
32,059
I'm going to ask this question of pkstormy then. How do you propose for a disconnected laptop user to be able to use the database and then get their data synched with the live backend?

(By the way, I used replication for 4 years, using indirect updates, for a tracking system between our systems in Sacramento and Portland, because I was not allowed to use SQL Server - our CEO refused to let me do so - and it worked well. However, it was not a transaction based system and updates were done only very infrequently.)

I don't see anything wrong with the negative number IDs. -2390210329 is no worse than 1, 2, 3. And when David suggested the disconnected laptop scenario, he FULLY STATED that the laptop would be LINKED to the network with a CABLE for doing the synchs. It was not to be using wireless, or any other method that would be cause for network disruption. Yes, if there is network disruption, Access can die. This goes for ANY Access database, not just a replicated one. If the cable based network connection is dropping packets, then they have a more serious issue at stake.
 

dfenton

AWF VIP
Local time
Today, 07:38
Joined
May 22, 2007
Messages
469
1. UNGODLY autonumber values such as -23432432334 which replication uses to do a 'hit and miss' to prevent duplication in a relational structure. A nightmare for any relational structure.

You mistake autonumber PKs for meaningful data. They are metadata, useful in the background, and if you care what the values are, you really don't understand their purpose.

So, this is *your* problem, not Jet replication's problem.

2. ANY loss of connection (or even the smallest hiccup) by an external user while updating data will lead to orphaned records.

Er, what? Do you mean a dropped connection during a direct synch? If so, then that's a problem with choosing an unreliable connection. If you mean regular editing, then that's a Jet problem, not a replication problem.

In any event, I've never seen an orphaned record in any replica set in the 13 years I've used Jet replication. I *have* seen errors/conflicts reported because of problems that led to referential integrity issues, but no data was ever corrupted and no data integrity was compromised. Yes, the problem has to be resolved in order for the data to get properly synched, but that's what you'd *want* to happen.

So, I think this one is complete malarkey, at least in the rather sketchy form in which the point is worded.

3. ANY loss of connection (or even the smallest hiccup) by an external user while synchronizing code will cause problems with the 'nodes' and possibly the master.

Synchronizing "code"? Who but those who don't know the rules of replication would synch anything other than pure Jet objects?

Further, this seems to me just a restatement of #2, i.e., using an unreliable connection for a direct synch can cause loss of data (or, at minimum, loss of replicability) if you have a dropped connection during the synch.

But that applies only to DIRECT SYNCHS, which are not recommended for any connection other than a wired LAN. There is no possibility of a loss of data or corruption if you use indirect or Internet synchronization.

So, if you tried to use direct synchronization in unsuitable circumstances and lost data, then THAT IS YOUR FAULT -- you were doing it wrong by not following best practices for replication.

4. Replication is a hog when it comes to trying to synchronize thousands of records (or code) and hiccups are frequent regardless of good connections. (I wouldn't even trust it to synchronize a dozen non-relational records.) Users with fast internet connections were constantly complaining about hiccups and inaccurate data during synchronizing. I was constantly fixing problems with destroyed replicated mdbs for users.

An Internet connection is not a suitable connection for a direct synch. It is only suitable for an indirect or Internet synch, and in those latter two cases, it is impossible to lose or corrupt data.

So, again, this is YOUR ERROR. You were using the wrong synchronization methods.

5. If the 'master' becomes corrupt in any way (which happens often),

No, it doesn't happen often, unless you're being a fool and using your Design Master as part of your daily synchronization topology. The DM should be squirrelled away somewhere safe, never edited by users and should be synched only often enough to keep it from expiring (the default retention period for replicas created through the Access UI is 1000 days, through DAO code, 60 days).
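For reference, the retention period can be inspected (and, on the Design Master, changed) through the Jet Replication Objects library. A hedged sketch, assuming a VBA reference to Microsoft Jet and Replication Objects (JRO); the path is a placeholder:

```vba
' Inspect and extend a replica's retention period via JRO.
' RetentionPeriod is writable only against the Design Master.
Public Sub ShowRetention()
    Dim rep As New JRO.Replica

    rep.ActiveConnection = "C:\Data\BackEnd.mdb"   ' placeholder path
    Debug.Print "Retention (days): " & rep.RetentionPeriod

    ' Extend a 60-day DAO-created replica set toward the UI default.
    rep.RetentionPeriod = 1000
End Sub
```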

I have recreated DMs, but never because it was corrupted during a synch. I've only done it to recover from unrecoverable replication errors (all of them that I can recall caused by my own mistakes, not by flaws in Jet replication).

a restore of the NON-replicated mdb will need to be done and re-creation of the master and nodes.

Recovering the DM does not require recreating the whole replica set. If you think that, then you really do not have any significant understanding of Jet replication.

It was actually the use of replication and the problems it presented that led to the downfall of the university of Milwaukee running the Midwest Renewable Energy Star program. I was then tasked to actually replace the replication schematic to develop a more reliable and accurate system for tracking the Renewable Energy program (since the error in totals stemming from the replication system were well above 10%).

I hate to be blunt, but Jet replication was not the cause of the problem -- it was the choice of a technology that was not understood by the people implementing it.

I don't mean to criticize your comments dfenton but saying someone needs to simply understand the basics of replication or that it runs smoothly and non-problematic is a vast understatement. Even understanding the basics (and advanced features as I did), it's a disaster of a product and one of MSAccess's worst features.

You have supplied nothing but examples of ignoring best practices and you're saying that the problems were inherent to Jet replication. They are not -- they are due to doing things wrong.

And best practices are easily figured out *if* you take the time to understand how Jet replication works. If you do that, you'd understand why direct replication is suitable only to wired LAN connections. You'd also understand that indirect/Internet replication can never corrupt replicas, because they never open the remote replica across the wire.

If you don't know these things, then you are a novice in regard to replication and shouldn't be making blanket assertions about whether it works or not. Everything you've described was YOUR OWN ERROR, not due to some inadequacies in Jet replication.

I strongly, strongly discourage its use for anything!

I would also strongly discourage anyone as ignorant of Jet replication as you seem to be from using Jet replication.

However, anyone who is willing to take the time to learn and implement best practices should have very little problem setting up a reliable synchronization system.
 

pkstormy

Registered User.
Local time
Today, 06:38
Joined
Feb 11, 2008
Messages
64
Bob,

I didn't really want to get into this, but since I started it with my comments about how bad replication is (and you decided to throw in a few offensive comments about skills)...

1. Autonumbers - they ARE very meaningful data, especially when you design relationships based on an autonumber field (although they 'may' not always serve a purpose to the user, they do to a dba.) If you think they are meaningless, you haven't been a dba. Again, the 'hit and miss' method which replication uses on the autonumber field to prevent duplicated autonumbers is just not good. I guess we have a big difference of opinion, as I do see problems with autonumber values like -2343243552343 and a 'hit and miss' type method. Your comments about not understanding the meaning of autonumbers are a bit excessive. Try explaining to the legislative evaluation agency which QC'd all Energy Star type data why the autonumber value -2342324224323 is missing in tableX. They DID care about such massive autonumber values, especially when relationships are established on them and you need to track down missing data.

2. Connections - 'Any' system, internally or externally connected, will undoubtedly have a hiccup. Heck, I used to see hiccups on our network drive all the time when I worked at the city (not my problem though). But what IS my problem is when a network (or external) hiccup affects data. When synching 100,000+ records over an external connection, I can't tell you how many times that failed with replication. You say it's not a replication issue and it's my fault or the fault of the user's slow connection. I say it IS a replication issue. I can try to blame the user for having a slow/unstable connection, but that doesn't carry much weight when the user lives in the boondocks and can only get a slower dialup type service. Try telling the legislature that the reason the Renewable Energy Star totals are not accurate is because some users have a slow connection and the data didn't synch correctly/fully with replication!

3. Synchronizing code - Yes, the University of Milwaukee designed it so it synched the code, which only added fuel to the fire for corrupted mdbs. This was one of the FIRST things I stopped when I inherited the design (again, I say INHERITED!)

4. I tried to keep this limited to replication and what problems I had, but the "YOUR OWN ERROR", "THAT IS YOUR FAULT" and other capitalized and NOVICE type comments go a little too far. I tried not to get into any comments about your skillset level (which I know a little bit about) and only stick with the replication issue, which you seem to indicate was all MY problem of not understanding replication and how it works (as well as a few other things.) I've been designing databases and programs for over 25 years and have set up quite a few enterprise level applications, including the entire Midwest Energy Star database, which consisted of all Energy Star run programs. I won't get into all the other programs I designed for different companies (as I'm sure you have also), but I do find such comments a bit offensive, as I'm sure you would also. As one of the moderators on the dbforums website, I came to this site to help others, not to be criticized. I don't mind being critiqued (or even told I'm wrong) but such comments as you made are uncalled for.

I'm recalling replication issues I dealt with over 10 years ago, since I haven't (nor would I) ever touch it again regardless of how great you may say it is. At the time, I knew quite a bit about replication and could have gone into more detail about the problems (and perhaps used more specific examples and better terminology). I can't recall offhand all the detailed problematic issues I had with it, but there were many. To me, losing data of any kind is a big deal! Spending time fixing replication related issues when I could use that time for development IS a big deal!


If you think replication is great, that's fine. You're probably one of the few who uses it extensively. But I personally wouldn't recommend it to anyone and discourage its use.

In regards to what was used in lieu of replication after I took over the INHERITED replication design...

Since the Midwest's Energy Star data ran to 5+ million records, something needed to be set up for our external slow-connection users.

1. Since I had already set up the other Energy Star programs' data on SQL Server, I put the Renewable Energy Star program on SQL Server and simply linked the tables into MSAccess (although I did design a few stored procedures and triggers for some which didn't have the SQL Server tables directly linked into the frontend). I also designed a few vb/php scripts to update certain tables (outside of MSAccess), but I won't get into the details on how those were actually used.

2. As I did with the other Energy Star programs, I designed an MSAccess application using the unbound form design technique (similar to some of the visual basic applications I had designed). This produced fast, reliable record writes/returns to the forms/tables and allowed users with slow connections to return or write records (out of the 5+ million) within a second. If a hiccup occurred, it didn't affect data, since data was only written to the tables after the user clicked a 'save' button on the unbound form, hence limiting the actual connection time to the tables to less than a second (again, I used a few stored procedures and triggers to make sure the data was saved cleanly). Our network guys set up a reliable Citrix connection for users to connect through, which worked very well, especially for some of the slowest dialup type connections. Some users connected via terminal server. I did something similar for the Experian credit card dataset we utilized, which was also in the millions of records. This setup works well and I haven't lost any data since.
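The unbound-form save in point 2 might look something like this in VBA. A hedged sketch with placeholder control, table, and field names; pkstormy's actual implementation used stored procedures and triggers rather than a plain insert:

```vba
' Unbound-form pattern: controls hold the data locally, and nothing
' touches the (linked) table until the user clicks Save, keeping the
' connection open for well under a second.
Private Sub cmdSave_Click()
    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef

    Set db = CurrentDb()
    ' Temporary parameterized QueryDef against a linked server table.
    Set qdf = db.CreateQueryDef("", _
        "PARAMETERS pName Text(50), pCity Text(50); " & _
        "INSERT INTO tblCustomers (CustName, City) VALUES (pName, pCity)")
    qdf.Parameters("pName") = Me.txtName   ' unbound textbox (placeholder)
    qdf.Parameters("pCity") = Me.txtCity   ' unbound textbox (placeholder)
    qdf.Execute dbFailOnError
    qdf.Close
End Sub
```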

At the hospital where I work now, I continue to store all our data on SQL Server (which I firmly believe should be used if the data is critical) and we use a combination of Citrix, terminal server, and VPN for users to connect. For some of the larger, more critical patient type databases, I still use the unbound forms technique in MSAccess.

Bob - you really need to work on some people skills with what you wrote. There was absolutely no need for you to write some of the things you wrote. Sarcasm does not fit in well on a help-site such as this. If you're stubborn enough to use replication for 13 years, I give you credit for that. But if you're stubborn enough not to grow beyond that and learn other techniques, well.... There are so many better ways you could learn, as discussed above, unless you're against improving your methods.

The good part is that you can say you're the only one (or one of the few) who has successfully used replication on this forum.
 

pkstormy

Registered User.
Local time
Today, 06:38
Joined
Feb 11, 2008
Messages
64
Bob - My sincerest apologies to you!! You are 100% correct.

I really, really feel like scum now, and you have every right to be angry with me for directing the post at you. Now I know why something didn't seem right. I kept thinking, "why is this guy now pushing replication so much when he's designed some really nice applications?" Something seemed extremely odd, since I'm familiar with a lot of your coding and none of it has any traces of replication. It also seemed odd that you were calling me a novice, since again, you've always seemed more professional than that, and I was sure we'd crossed paths on one of the forums before. (In some fairness to David, I just started answering posts on this site a few weeks ago, so he doesn't know me.) Now it all makes sense.

Bob - You could've/should've replied a bit more harshly with me for making such a big mistake. My gratitude for going easy. I don't like to make mistakes like this.

DAVID - I've never seen any of your coding, so I can't really vouch for any of your points on replication. You might want to look at other developers' designs to get some pointers. I've posted several coding snippets of my work which you can freely look at and critique as much as you like: http://www.dbforums.com/microsoft-access/1605962-dbforums-code-bank.html. You're welcome to post any of YOUR examples (not just screen snapshots). I went to your DFA website and read some things we could dispute on your selling points (which could tie up several threads discussing some of what I consider flawed selling ideas), but I didn't see any code examples, which I'd be happy to look at. You've probably never looked at any of my (or others') examples, so I can understand (to some point) your naivety. I stand by my statement that you really need to work on your people skills (while I work on my reading skills, as Bob suggested.)

Again, BOB, my sincerest, sincerest apologies!
 

dfenton

AWF VIP
Local time
Today, 07:38
Joined
May 22, 2007
Messages
469
To pkstormy:

I don't need to post any code. You can simply Google my name and you'll find literally thousands of posts in Access forums going back to 1996 that include lots of code.

You could also check my profile and go to my website and find code on my Access page.

And you might find the link to the Jet Replication Wiki:

http://dfenton.com/DFA/Replication

If you do any of those things, you'll find I'm no novice and that I know a thing or two about replication.

Now, to be fair, when I started with Jet replication, I didn't know what I needed to know and benefited greatly from the advice of people like Michael Kaplan. You are lucky he has moved on to other issues, as if you think my reply was harsh, you have no idea about harsh!

There is nothing in your reply that is anything other than just repeating your same set of errors and arguing that you have made no errors based on an argument from authority. You make completely wrong claims about assigning meaning to Autonumbers (you are in a distinct minority in claiming that they should have meaning), and you blame on replication the sensitivity to dropped connections, when that problem is inherent to Jet precisely because it's a file-based database engine.

Thus, I can only conclude that you really don't understand the issues involved.

You admit that you didn't even try to fix your replication problems, and that your experience is around a decade old. I started with replication in 1997, made all the possible mistakes, and by 1998, had learned what to do and had multiple apps using Jet replication, one of which was used to allow one of my clients to share data between their New York and London offices. That app used indirect replication with twice-a-day synch, and never lost any data (once I got away from using incorrect methods!). Since that time, I've created and supported dozens of replicated apps and consulted with other developers on designing and maintaining their replication subsystems.

Jet replication still works if you use it properly.

If you don't, it breaks and you lose data.

But that is the case with any technology.

It is a poor workman who blames his tools, and I can't see how that doesn't apply to you since you are blaming replication for problems that were unquestionably caused by improperly designing your application.

Don't blame me if the truth seems rather harsh.
 

pkstormy

Registered User.
Local time
Today, 06:38
Joined
Feb 11, 2008
Messages
64
To pkstormy:

I don't need to post any code. You can simply Google my name and you'll find literally thousands of posts in Access forums going back to 1996 that include lots of code.

{edited - some original comments made that were non-professional}

What a freakin ego. I can also say that I have thousands of posts. That means nothing. ("show me the money"!) But I've also posted several, several coding snippets as well as full applications to help others.

You can google my name also and find it everywhere so that again, says nothing to me.

You're scary, David, because you're not willing to adapt and learn new things to make the process fit the environment. You 'blindly' recommend replication. I don't consider you a novice at all (which is the scariest part.) I just consider you stubborn in your ways, not willing to adapt or learn new things.

If you weren't harping so much on this replication process (which IS often problematic except for the "SIMPLEST" of databases), I might agree with you on some concepts. But you still need to advance and learn a few things that deal with large recordsets and enterprise-level applications. I would argue this point with anyone who tries replication on a large relational recordset, regardless of who they 'may be'.

I really don't mean to 'cut' you in any way. I don't like to do this. But when you push something and say it's great which I know will eventually be problematic for the user, I have to say my piece.
 

pkstormy

Registered User.
Local time
Today, 06:38
Joined
Feb 11, 2008
Messages
64
David,

{edited - some original comments made that were non-professional}

I have in my signature that I started with Access 1.0. I don't expect that to mean anything to you or anyone else (before that, probably like you, I was designing dBase applications back in the mid-'80s). I could've designed a couple of MS Access 1.0 (or 97) applications, left it at that, and said I was 'experienced'.

To me, it's not your skillset that bothers me (or what you've said about mine - well, maybe a little; I have a bit of an ego as well). What IS important to me is that the posters on this forum get correct guidance. I know you're doing your best to guide them, as am I. We have a big difference of opinion on replication. But I consider both of us experienced developers who continually need to grow, regardless of how many years of experience we may already have.

Again, I don't mean to be cruel. I just have to deal day in and day out with fixing other developers' programs, which sometimes leads to frustration on my end about why they didn't keep up to date on finding an easier method (look at some of my working examples).

Replication (like ADP), to me, is a dying (dead) technology being replaced by things such as .NET/SQL Server and the like (although I've always been a SQL Server addict since 6.5). I have not seen support by Microsoft to fix old-style problems with replication (or ADP). They just re-emphasize a 'best practice' type step (of many) which is easily ignored by the developer and causes problems - especially DATA problems, which I consider critical. Microsoft may still support newer replication technology, but the basic concept/method used is still the same.

Your 'best practices' step approach only re-emphasizes that replication is not a simple process for someone who might be new to development. It is a dangerous tool in the hands of the inexperienced, and someone else is going to have to eventually 'clean up' the mess.

If you think I'm being unfair regarding replication, perhaps I am. It is NOT a widely used technique, especially with new technology at hand. I have not seen another replicated mdb in the last 10 years (that doesn't mean, though, that I don't check to see if anything's changed with it). AGAIN, that doesn't mean I don't keep up to date on its use of autonumbers and 'hit/miss' methods to prevent duplication, which is simply not good.

There comes a point, though, when troubleshooting time overtakes development time and data is critical (no - it's not a simple 10,000-record set with 1 or 2 tables! - it's a 100,000+ record set with multiple, multiple relationships that needed to be synchronized - you always forget this point!). It may work for you and simple db's, but it doesn't work for enterprise-level mdb's. I won't even mention the fact that this db needed to be linked with other in-house applications I developed (ranging into the 5+ million record size), but that's a whole other point aside from the 'simple' replication process you so highly praise. To me, spending time fixing replication issues when there are better developmental things to focus on is simply ridiculous. Spending weeks of time fixing 'missing' data is not acceptable.

My point is, why even create a relational table structure for a replicated mdb when it so easily breaks those rules because a 'best practice' step wasn't followed? This is not good. Relational table structures should never be allowed to be 'broken', bypassing the referential integrity rules, which replication allows to happen.

There's no other technique I've seen which allows you to so easily break the referential integrity rules. Having a half-dozen 'best practice' type steps doesn't make something easier or recommended. It just makes it more susceptible and prone to errors a developer could make. When replication affects a relational db structure and makes it easy to lose data by not following a list of 'best practice' steps, I consider that critical. If I see ungodly autonumber values, that concerns me. If it doesn't concern you, then again, you need to put yourself in a dba's shoes.

and YES - I DID follow all the 'best practices' of replication!

We have a BIG difference on the meaning of autonumbers. Again, these ARE important to dba's. Searching for missing record -22342342323 out of millions of records is not good. I didn't like it, nor did the legislature evaluation agency.
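As background for readers: when a Jet database is made replicable, AutoNumber fields switch from incremental to random assignment so that separate replicas can insert rows independently between synchs; the values are drawn from the signed 32-bit Long range, which is why roughly half of them come out as large negative numbers. Here is a minimal Python sketch (not Access code - the function name and sample size are illustrative assumptions, not from this thread) of that behavior:

```python
import random

# Illustrative only: a replicated Jet AutoNumber is a random value from
# the full signed 32-bit Long range, not a 1, 2, 3... sequence, so two
# replicas are unlikely to pick the same ID before the next synch.
def replicated_autonumber():
    return random.randint(-2**31, 2**31 - 1)

random.seed(42)  # for a reproducible demo
ids = [replicated_autonumber() for _ in range(1000)]

# About half the generated IDs are negative - the "ungodly"
# values the posts above are complaining about.
negative = sum(1 for n in ids if n < 0)
print(f"{negative} of {len(ids)} IDs are negative")
```

This is only a model of why the IDs look the way they do; it says nothing about either side of the argument over whether that trade-off is acceptable.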

Oh well. I guess we'll have more posts to come.

Post some working examples! I'm actually curious to see what you have.

Here again is the link to 7 pages of working examples by myself (and others): http://www.dbforums.com/microsoft-access/1605962-dbforums-code-bank.html
 