How do you sync a pre-aggregated table in PostgreSQL?

EliiotM

New member
Local time
Today, 17:02
Joined
Jan 24, 2022
Messages
9
I have a problem with aggregating data in one table. Without caching it takes around a minute, so I've decided to pre-aggregate the results into a separate table.
So everything seems to be going great, but I have a small question. Our backend is inserting data into the main table, let's call it "A". My pre-aggregated table "A1" is created by a cron job and populated at midnight. But what about after that? Do I just insert the data twice, into both tables?
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 10:02
Joined
Feb 28, 2001
Messages
27,003
Hello, EliiotM, ... I can't answer your question, but by making this post I will move your question back up to the top for someone to take a shot at answering it for you. I'm going to guess that we don't have a LOT of PostgreSQL folks here.
 

sonic8

AWF VIP
Local time
Today, 16:02
Joined
Oct 27, 2015
Messages
998
Do I just insert the data twice into two tables?
If the data in table A1 is aggregated, just inserting a new record will probably not produce the desired results.

There are two basic options.
1. Make it clear to the users that the aggregated data may be up to 24 hours out of date.
2. Whenever a record is modified in the main table A, rebuild the aggregated table A1. Of course, if this happens frequently, it will defeat the purpose of the aggregated table.

There may be additional options specific to PostgreSQL, which I do not know. A simple Google search shows several results, which may be of interest.
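One such PostgreSQL-specific option is a materialized view, which stores the aggregated result and can be refreshed on a schedule. A minimal sketch, assuming a hypothetical main table "a" with columns item_id and amount (adjust names to your schema):

```sql
-- Define the aggregate once as a materialized view.
CREATE MATERIALIZED VIEW a1 AS
SELECT item_id,
       SUM(amount) AS total_amount,
       COUNT(*)    AS row_count
FROM a
GROUP BY item_id;

-- A unique index is required for CONCURRENTLY, which lets the
-- refresh run without blocking readers of a1.
CREATE UNIQUE INDEX a1_item_id_idx ON a1 (item_id);

-- Run this from the existing cron job (or more often if needed):
REFRESH MATERIALIZED VIEW CONCURRENTLY a1;
```

With this approach the backend only ever writes to table "a"; the view is rebuilt from it, so there is no double-insert to keep in sync.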
 

Pat Hartman

Super Moderator
Staff member
Local time
Today, 11:02
Joined
Feb 19, 2002
Messages
42,985
When you have files that need to be downloaded every day, the quickest method is to use TRUNCATE to get rid of the old data and then an append query to load the downloaded data into the table.

Do not try to update rows. Just replace the entire table.
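In PostgreSQL that pattern can be wrapped in a transaction, since TRUNCATE there is transactional. A minimal sketch, assuming placeholder tables "a" (main) and "a1" (aggregated) with item_id and amount columns:

```sql
-- Rebuild the aggregate from scratch inside one transaction.
-- TRUNCATE takes an ACCESS EXCLUSIVE lock, so concurrent readers
-- block until COMMIT rather than ever seeing an empty table.
BEGIN;
TRUNCATE a1;
INSERT INTO a1 (item_id, total_amount)
SELECT item_id, SUM(amount)
FROM a
GROUP BY item_id;
COMMIT;
```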
 

Isaac

Lifelong Learner
Local time
Today, 08:02
Joined
Mar 14, 2017
Messages
8,738
In SQL Server, there are cases where an "upsert" is more efficient, but it all depends on the AMOUNT of data. Frankly, I've never had a good case for it, but there are those who deal with large data in SQL Server who swear by the usefulness of the MERGE statement.

Maybe your RDBMS has something like that.
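PostgreSQL does: its long-standing upsert form is INSERT ... ON CONFLICT (and PostgreSQL 15+ also has a SQL-standard MERGE). A sketch assuming a1(item_id PRIMARY KEY, total_amount) and a hypothetical created_at timestamp column on the main table, folding one day's new rows into the existing aggregate instead of rebuilding it:

```sql
-- Add today's deltas into the aggregate; EXCLUDED refers to the
-- row that failed to insert because of the conflict.
INSERT INTO a1 (item_id, total_amount)
SELECT item_id, SUM(amount)
FROM a
WHERE created_at >= CURRENT_DATE   -- hypothetical timestamp column
GROUP BY item_id
ON CONFLICT (item_id)
DO UPDATE SET total_amount = a1.total_amount + EXCLUDED.total_amount;
```

Note this only works if the aggregate is additive (sums, counts); averages or distinct counts still need a full rebuild.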
 

Minty

AWF VIP
Local time
Today, 15:02
Joined
Jul 26, 2013
Messages
10,355
I have found SQL Merge to be very useful in maintaining data lists that change frequently, but as you say it's quite an unusual requirement.
 

jdraw

Super Moderator
Staff member
Local time
Today, 11:02
Joined
Jan 23, 2006
Messages
15,364
EliiotM,

As Doc said, there aren't too many PostgreSQL people here. Others have provided some advice, but I'd like to hear a little more about the requirement, the timing of the data, and the "refresh/update" of the database for users.

We had a setup with:
-a corporate database
-various point-in-time databases (I think these relate to your aggregated data)
-some local, custom "personal" databases.

The point-in-time databases were "aggregations" for month-end/quarter-end/year-end type things.
These were derived from the corporate database and structured/frozen as read-only.
These offered consistent data for analysis/reporting.

Good luck with your project.
 
