Linking two similar tables with single queries

samcooldude1430 (Registered User, joined Sep 11, 2016)
Dear Expert,

I have three identical tables for different months (June, August, September). I want to run a single query so that data can be pulled for all three months together.

Please guide me for the same.

Thanks
 
Working:- :)
Now next step is:
BBH KPI Database_June'16 and BBH KPI Database_Aug'16 are my two tables, and period_start_time, bts_name and tch_comp_rate are my columns.

My query goes as follows:
SELECT period_start_time, bts_name, tch_comp_rate
FROM [BBH KPI Database_June'16]
WHERE bts_name = [Enter BTS Name:]
UNION ALL
SELECT period_start_time, bts_name, tch_comp_rate
FROM [BBH KPI Database_Aug'16]
WHERE bts_name = [Enter BTS Name:];

When I run the query it asks me for the BTS Name and displays the two months' data.
Now I want to query for specific dates; what changes do I have to make in the above query? :banghead:
 
I would scratch the WHERE clauses from that query entirely. I would do the UNION in one database query, save it and give it a name--let's say MainQuery. Then build another query using MainQuery as its source. Then in that second query is where I would apply criteria.
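As a sketch of that approach (the table and field names come from the query above; the [Start Date:] and [End Date:] parameter prompts are my assumption, not something from the thread):

```sql
-- MainQuery: just the UNION, no criteria, saved as a named query
SELECT period_start_time, bts_name, tch_comp_rate
FROM [BBH KPI Database_June'16]
UNION ALL
SELECT period_start_time, bts_name, tch_comp_rate
FROM [BBH KPI Database_Aug'16];

-- Second query: apply all criteria against MainQuery
SELECT period_start_time, bts_name, tch_comp_rate
FROM MainQuery
WHERE bts_name = [Enter BTS Name:]
  AND period_start_time BETWEEN [Start Date:] AND [End Date:];
```

This also answers the date question: the date criteria go in the second query, so the UNION never has to change.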
 
I have three identical tables for different months (June, August, September).
This is a design flaw. Time (the date/time of an event) is a single discrete dimension of an object just like inches or dollars or pounds, and it belongs in a field in the table that describes the object.

Making time a part of the structure of your data makes your life harder, because now you have to knit this data back into a single table using UNION queries, which is a total make-work project.
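For illustration only (the single-table name, the date range, and Access's # date literals here are assumptions), once the period lives in a field, the same pull becomes one plain query with no UNION at all:

```sql
-- One table holds every month; the month is data, not part of a table name
SELECT period_start_time, bts_name, tch_comp_rate
FROM [BBH KPI Database]
WHERE bts_name = [Enter BTS Name:]
  AND period_start_time BETWEEN #6/1/2016# AND #9/30/2016#;
```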

Hope this helps,
 
This is a design flaw. Time (the date/time of an event) is a single discrete dimension of an object just like inches or dollars or pounds, and it belongs in a field in the table that describes the object.

Making time a part of the structure of your data makes your life harder, because now you have to knit this data back into a single table using UNION queries, which is a total make-work project.

Hope this helps,

Got your point, but if I make a single table, the rows exceed 0.5 million and keep on growing.
Now, past roughly 0.5 million rows, when I try to add extra rows I get an error like "Cannot open database. It may not be a database that your application recognizes, or the file may be corrupt", and Access does not take any more rows.
So I decided to split the table into a monthly structure to avoid that error.
Hope I make sense! Please let me know if there is any other way.

Thanks
 
I would scratch the WHERE clauses from that query entirely. I would do the UNION in one database query, save it and give it a name--let's say MainQuery. Then build another query using MainQuery as its source. Then in that second query is where I would apply criteria.

Yes, got it. Thanks. I have skipped the WHERE part and saved it as MainQuery.
Also, please suggest what changes I have to make so this main query gets sorted in ascending order, I mean date-wise.
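For reference, a UNION query in Access takes a single ORDER BY clause at the very end, applied to the whole combined result. Assuming MainQuery keeps the period_start_time field, a sketch:

```sql
-- Sort the saved union date-wise in a wrapper query
SELECT period_start_time, bts_name, tch_comp_rate
FROM MainQuery
ORDER BY period_start_time ASC;
```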

Thanks..
 
500,000 rows in a single table is fine.

A UNION query that joins 500,000+ rows from multiple tables will be much less efficient than a single table.
 
I have had tables with several hundred thousand entries before. The question becomes one of structure. How many fields are involved here and what (if any) keys are involved? If you under-allocate key size, you could limit yourself pretty badly, and a table-scan of 0.5 Mil records will not be fast no matter WHAT the key size.

Let me try this another way. If you have three tables that each fit correctly and your UNION query works to do the right thing, then the recordset off the UNION query is the same recordset you would get from one table with all three months in it. If one won't work, why would the other? I think you abandoned the merging of your tables too soon, though I believe you might end up having to export your tables to another format and then re-import them.

To me, this problem speaks of some yet undiscovered underlying flaw in the design that is imposing your size constraint. I don't know that a simple query could ever cause a database to reliably become corrupted (SQL isn't supposed to be able to do that) no matter whether it was a SELECT, UPDATE, INSERT INTO, or DELETE, as long as you aren't hosing the MSys-... (system) tables. Therefore I am suggesting that your problem was a system glitch or a procedural error during your data operation on the big table.

Give us some info on the table: How many fields, what kinds of fields, how many records are you expecting, etc. When helping others with databases, the biggest problem is that we can't help if we don't understand your goal. Stated another way, how can we give you directions if we don't know where you are going?
 
What are the best practices for creating an Access database?

Actually my Access database starts behaving abnormally when the file size reaches 20 GB. :confused:
 
I have had tables with several hundred thousand entries before. The question becomes one of structure. How many fields are involved here and what (if any) keys are involved? If you under-allocate key size, you could limit yourself pretty badly, and a table-scan of 0.5 Mil records will not be fast no matter WHAT the key size.

Let me try this another way. If you have three tables that each fit correctly and your UNION query works to do the right thing, then the recordset off the UNION query is the same recordset you would get from one table with all three months in it. If one won't work, why would the other? I think you abandoned the merging of your tables too soon, though I believe you might end up having to export your tables to another format and then re-import them.

To me, this problem speaks of some yet undiscovered underlying flaw in the design that is imposing your size constraint. I don't know that a simple query could ever cause a database to reliably become corrupted (SQL isn't supposed to be able to do that) no matter whether it was a SELECT, UPDATE, INSERT INTO, or DELETE, as long as you aren't hosing the MSys-... (system) tables. Therefore I am suggesting that your problem was a system glitch or a procedural error during your data operation on the big table.

Give us some info on the table: How many fields, what kinds of fields, how many records are you expecting, etc. When helping others with databases, the biggest problem is that we can't help if we don't understand your goal. Stated another way, how can we give you directions if we don't know where you are going?

Ok, I am sharing the details:
There are 254 fields, and I have to create a table of around 5 million rows.

Now I have to make a daily trend (graph) with specific dates for around 30 of the fields, and the requirement for the trend may change dynamically.

I have monthly Excel .xlsx files which I need to import into Access, but the import returns an error. So I am copying and pasting the Excel data into the table for each day manually. After inserting 0.3/0.4 million rows, Access behaves abnormally, returning corruption messages and so on.

Hope my requirement is clear. Thanks
 
There are 254 fields

You need to structure your tables properly. 254 fields is not correct, and your multiple tables with the same structure are not correct.

You need to determine the proper structure for your database, create it, and then move your data to that structure. Read up on normalization (https://en.wikipedia.org/wiki/Database_normalization); that is what that process is called. Work through a few tutorials, give it a shot on your data, set up your tables in Access's Relationships tool, post a screenshot of it back here, and we can help you get the right structure.
 
You need to structure your tables properly. 254 fields is not correct, and your multiple tables with the same structure are not correct.

You need to determine the proper structure for your database, create it, and then move your data to that structure. Read up on normalization; that is what that process is called. Work through a few tutorials, give it a shot on your data, set up your tables in Access's Relationships tool, post a screenshot of it back here, and we can help you get the right structure.

I haven't got the right to post a picture. However, here are the fields:

period_start_time
bsc_int_id
bsc_name
bcf_name
bts_int_id
bts_name
bts_id
seg_id
la_id_lac
cell_id
total_call_trf_6
traffic_area_trf_1
traffic_trf_12b
half_rate_traffic_trf_102
amr_full_rate
amr_half_rate
average_dhr_traffic
sdcch_comp_rate
tch_comp_rate
handover_success_rate
sdcch_assignment
tch_assignment
rx_qual_dl
rx_qual_ul
random_acc_succ_rate
ave_busy_tch
res_av_denom14
ave_tch_busy_half
sdcch_comp_rate_nom
sdcch_comp_rate_denom
tch_comp_rate_nom
tch_comp_rate_denom
handover_success_rate_nom
handover_success_rate_denom
sdcch_assignment_nom
sdcch_assignment_denom
tch_assignment_nom
tch_assignment_denom
rx_qual_dl_nom
rx_qual_dl_denom
rx_qual_ul_nom
rx_qual_ul_denom
random_acc_succ_rate_nom
random_acc_succ_rate_denom
sdcch_radio_fail
sdcch_rf_old_ho
sdcch_user_act
sdcch_bcsu_reset
sdcch_netw_act
sdcch_bts_fail
sdcch_lapd_fail
sdcch_a_if_fail_call
sdcch_a_if_fail_old
sdcch_assign
sdcch_ho_seiz
sdcch_abis_fail_call
sdcch_abis_fail_old
drop_after_tch_assign
tch_re_est_release
tch_new_call_assign
msc_i_tch_tch
bsc_i_tch_tch
bsc_o_tch_tch
msc_o_tch_tch
bsc_o_succ_ho
cell_succ_ho
msc_o_succ_ho
msc_o_ho_cmd
bsc_o_ho_cmd_assgn
bts_ho_assgn
sdcch_busy_att
sdcch_seiz_att
tch_seiz_due_sdcch_con
tch_call_req
tch_norm_seiz
tch_succ_seiz_for_dir_acc
bsc_o_sdcch_tch
cell_sdcch_tch
msc_o_sdcch_tch
a_if_crc_mismatch_call_setup
freq_dl_qual0
freq_dl_qual1
freq_dl_qual2
freq_dl_qual3
freq_dl_qual4
freq_dl_qual5
freq_dl_qual6
freq_dl_qual7
freq_ul_qual0
freq_ul_qual1
freq_ul_qual2
freq_ul_qual3
freq_ul_qual4
freq_ul_qual5
freq_ul_qual6
freq_ul_qual7
ghost_ccch_res
rej_seiz_att_due_dist
bcsu_overload_lower_limit
bcsu_overload_upper_limit
bcsu_overload_deleted_rach
packet_ch_req
ch_req_msg_rec
erlang_drop_cell
no_sdcch
tch_cong_time_cngt1
sdcch_cong_time_cngt2
traffic_nom_trf_12b
traffic_denom_trf_12b
no_sdcch_nom
no_sdcch_denom
tch_rej_due_req_ch_a_if_crc
tch_radio_fail
tch_rf_old_ho
tch_abis_fail_call
tch_abis_fail_old
tch_a_if_fail_call
tch_a_if_fail_old
tch_tr_fail
tch_tr_fail_old
erlang_drop_nom
erlang_drop_denom
tch_lapd_fail
tch_bts_fail
tch_user_act
tch_bcsu_reset
tch_netw_act
tch_act_fail_call
bsc_o_tch_tch_at
bsc_o_sdcch_tch_at
bsc_o_sdcch_at
bsc_i_sdcch_tch
bsc_i_tch_tch_at
bsc_i_sdcch_tch_at
bsc_i_sdcch_at
bsc_i_succ_ho
bsc_i_unsucc_a_int_circ_type
cell_tch_tch_at
cell_sdcch_tch_at
cell_sdcch_at
msc_i_sdcch_tch
msc_i_tch_tch_at
msc_i_sdcch_tch_at
msc_i_sdcch_at
msc_i_succ_ho
msc_o_tch_tch_at
msc_o_sdcch_tch_at
msc_o_sdcch_at
msc_controlled_in_ho
ho_unsucc_a_int_circ_type
tch_ho_assign
tch_norm_release
tch_ho_release
ave_sdcch_sub
ave_non_avail_sdcch
res_av_denom3
tch_cong_time
sdcch_cong_time
ms_tch_succ_seiz_assign_cmplt
ave_avail_tch_sum
ave_avail_tch_den
trx_type
period_duration
spare057044
tsqigood_nom
tsqiaccept_nom
tsqibad_nom
freq_dl_all
tch_pri_subscr_req
tch_pri_subscr_succ_seiz
que_all_ass_req_att
ava_21a
ava_17a
spare057046
msc_o_call_clr
bsc_o_call_clr
cell_call_clr
ava_34a
succ_released_ctm_call
cell_fail_lack
msc_o_fail_lack
bsc_o_fail_lack
900 Band
1800 Band
Total TRX Count
dhr_multiplexing_attempts
dhr_multiplx_fail_due_tch_res
csdap_res_alloc_att_for_dhr
dhr_mplx_fail_due_csdap_res
dhr_mplx_fail_due_other
ave_busy_dhr_tch
ave_busy_dhr_tch_denom
ho_from_amr_hr_to_dhr_succ
ho_attempt_from_amr_hr_to_dhr
ho_from_amr_fr_to_dhr_succ
ho_attempt_from_amr_fr_to_dhr
total_dhr_multiplexing_failure_rate
dhr_multiplexing_failure_rate_due_to_lack_of_suitable_pair
dhr_multiplexing_failure_rate_due_to_csdap
dhr_multiplexing_failure_rate_due_to_other_reasons
amr_hr_to_dhr_ho_success_rate
amr_fr_to_dhr_ho_success_rate
flag
tch_req_rej_due_abis_cong
call_setup_rej_due_abis_cong
gprs_terr_ug_rej_due_abis
tch_req_rej_due_etp_overload
call_setup_rej_due_etp_over
cssr_trai
tch_congestion_trai
time_of_amr_wb_calls
tch_requests_for_amr_wb
tch_seizures_for_amr_wb
tch_ended_transc_fail_codec_5
succ_change_amr_to_amr_wb
succ_change_amr_wb_to_amr
nbr_of_ul_fer_cl_gsm_hr_percent
nbr_of_ul_fer_cl_gsm_fr_percent
nbr_of_ul_fer_cl_efr_percent
nbr_of_ul_fer_cl_amr_hr_percent
nbr_of_ul_fer_cl_amr_fr_percent
nbr_of_ul_fer_cl_wb_amr_percent
nbr_of_ul_fer_cl_osc_dhr_percent
nbr_of_ul_fer_cl_osc_fr_percent
ava_1g
ava_1g_nom
ava_1g_denom
final_rtsl_down_due_to_flexi_auto_trx
ava_1g_denom_considering_auto_trx_shutdown
ava_1g_considering_auto_trx_shutdown
band_900_call_trf_6
band_900_traffic_trf_1
band_1800_call_trf_6
band_1800_traffic_trf_1
cssr_nps
tch_req_rej_due_to_tx_power
power_balancing_ho_requested
power_balancing_ho_req_rej
gprs_ter_ug_rej_due_lack_pow
tch_req_rej_lack
ave_used_tx_power
peak_used_tx_power
res_av_denom1
average_used_tx_power
band_900_tch_comp_rate
band_900_handover_success_rate
band_900_rx_qual_dl
band_900_rx_qual_ul
band_1800_tch_comp_rate
band_1800_handover_success_rate
band_1800_rx_qual_dl
band_1800_rx_qual_ul
band_1800_bts_name
band_900_bts_name
bcch_downtime
bcch_uptime


Also let me know how to set up Access's Relationships tool. Thanks for the help.
 
Yeah, you really need to read up on normalization. Even without knowing what those fields represent, I am confident you haven't done it correctly. All that data should not go into just one table. You are going to need a few tables to hold it properly.

The big issue I see is that you are storing values in names. This goes for both tables (e.g. BBH KPI Database_June'16) and fields (e.g. nbr_of_ul_fer_cl_gsm_hr_percent). Your table should be called [BBH KPI Database] and it should have a field which stores the June'16 part. That way you can put all your data in one table and know what period each record is for by looking at a value in a field.

Also, when you store values in field names, it's a sign you need a new table. While I have no idea what [nbr_of_ul_fer_cl_gsm_hr_percent] and [nbr_of_ul_fer_cl_efr_percent] are for, I do know that the 'gsm_hr' and 'efr' portions are values. Suppose I wanted to store grade percentages for various classes. I would not have a table like this:

Student, SciencePercent, MathPercent, HistoryPercent...
Steve, 88, 91, 90, ...
Sally, 78, 83, 60, ...

I would instead store it like this:

Student, Class, GradePercent
Steve, Science, 88
Steve, Math, 91
Steve, History, 90
Sally, Science, 78
Sally, Math, 83
Sally, History, 60

Tables should grow vertically (with more rows), not horizontally (with more fields). Again, read up on normalization and create the proper structure for your data.
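For what it's worth, a sketch of how the wide layout above could be unpivoted into the tall one with a UNION query (the Grades table name is hypothetical, made up for this example):

```sql
-- Turn one wide row per student into one tall row per student/class pair
SELECT Student, "Science" AS Class, SciencePercent AS GradePercent FROM Grades
UNION ALL
SELECT Student, "Math", MathPercent FROM Grades
UNION ALL
SELECT Student, "History", HistoryPercent FROM Grades;
```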
 
Yeah, you really need to read up on normalization. Even without knowing what those fields represent, I am confident you haven't done it correctly. All that data should not go into just one table. You are going to need a few tables to hold it properly.

The big issue I see is that you are storing values in names. This goes for both tables (e.g. BBH KPI Database_June'16) and fields (e.g. nbr_of_ul_fer_cl_gsm_hr_percent). Your table should be called [BBH KPI Database] and it should have a field which stores the June'16 part. That way you can put all your data in one table and know what period each record is for by looking at a value in a field.

Also, when you store values in field names, it's a sign you need a new table. While I have no idea what [nbr_of_ul_fer_cl_gsm_hr_percent] and [nbr_of_ul_fer_cl_efr_percent] are for, I do know that the 'gsm_hr' and 'efr' portions are values. Suppose I wanted to store grade percentages for various classes. I would not have a table like this:

Student, SciencePercent, MathPercent, HistoryPercent...
Steve, 88, 91, 90, ...
Sally, 78, 83, 60, ...

I would instead store it like this:

Student, Class, GradePercent
Steve, Science, 88
Steve, Math, 91
Steve, History, 90
Sally, Science, 78
Sally, Math, 83
Sally, History, 60

Tables should grow vertically (with more rows), not horizontally (with more fields). Again, read up on normalization and create the proper structure for your data.

I got your objective; yes, the table can be normalized, but these are technical data where the table structure cannot be altered much. The fields will be constant for the next three years without change. Thanks for your advice.

Moreover, I have successfully created the tables and made the unions too. I created two union queries with some fields selected. Now I want to query from these unions. I wrote my SQL query as

SELECT [Union_part 2].period_start_time, [Union_part 2].bts_name, [Union_part 2].sdcch_comp_rate
FROM [Union_part 2]
WHERE ((([Union_part 2].bts_name)=[Enter BTS Name:]));

which is working fine. But when I want to query fields from two unions simultaneously, I am using the code below

SELECT [Union_part 1].period_start_time, [Union_part 1].bts_name, [Union_part 1].bcch_uptime, [Union_part 2].tch_norm_seiz
FROM [Union_part 1], [Union_part 2]
WHERE ((([Union_part 1].bts_name)=[Enter BTS Name:]));

But in this case, I am getting seemingly endless rows of data for the selected bts_name.

What should be the appropriate query in this case?

Help please
 
I don't know what you are trying to achieve with that query so I don't know what the appropriate query should be. But I do know what you did.

Code:
FROM [Union_part 1], [Union_part 2]

Not to get too set theory on you, but with that code you've created a cartesian product (https://en.wikipedia.org/wiki/Cartesian_product). You've essentially lined up every record in Union_part 1 with every record in Union_part 2, making a final query whose size is the product of the record counts of the two queries: [RowsInFinalQuery] = [RowsInQuery1] x [RowsInQuery2]

Usually you have only one data source in the FROM clause, and the other data sources are connected via JOIN clauses (LEFT, RIGHT, INNER). You put two in the FROM and did this weird set-multiplication thing.
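A sketch of that JOIN approach, assuming (this is a guess, not something stated in the thread) that the two unions match records on period_start_time and bts_name:

```sql
-- Join the two saved unions on their shared key fields instead of
-- listing both in FROM, which would produce a cartesian product
SELECT U1.period_start_time, U1.bts_name, U1.bcch_uptime, U2.tch_norm_seiz
FROM [Union_part 1] AS U1
INNER JOIN [Union_part 2] AS U2
  ON (U1.period_start_time = U2.period_start_time)
 AND (U1.bts_name = U2.bts_name)
WHERE U1.bts_name = [Enter BTS Name:];
```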

Now back to me trying to get you to structure this thing correctly:

Just because you receive data in a certain manner doesn't mean you have to build this horrible Rube Goldberg query system to try (and I stress try, because you are not there yet) to get your data out of it. Life would be much simpler if you structured this thing correctly. A couple of hours of learning, and then a few more to build your process to get this data into the correct structure, will absolutely pay huge dividends when you are looking at a 3-year life for this system.
 
I don't know what you are trying to achieve with that query so I don't know what the appropriate query should be. But I do know what you did.

Code:
FROM [Union_part 1], [Union_part 2]

Not to get too set theory on you, but with that code you've created a cartesian product. You've essentially lined up every record in Union_part 1 with every record in Union_part 2, making a final query whose size is the product of the record counts of the two queries: [RowsInFinalQuery] = [RowsInQuery1] x [RowsInQuery2]

Usually you have only one data source in the FROM clause, and the other data sources are connected via JOIN clauses (LEFT, RIGHT, INNER). You put two in the FROM and did this weird set-multiplication thing.

Now back to me trying to get you to structure this thing correctly:

Just because you receive data in a certain manner doesn't mean you have to build this horrible Rube Goldberg query system to try (and I stress try, because you are not there yet) to get your data out of it. Life would be much simpler if you structured this thing correctly. A couple of hours of learning, and then a few more to build your process to get this data into the correct structure, will absolutely pay huge dividends when you are looking at a 3-year life for this system.

Thanks dear, I am a novice and tend to make mistakes.
My query is that I have made two unions, each having a few common and a few different fields. I mean Union 1 has the x, y, z, a, b, c fields and Union 2 has the a, b, c, d, e, f fields.

Now, I want to query fields from both the unions using a single query. I mean, suppose from Union 1 I want the x, y fields and from Union 2 the e, f fields. What would be the procedure to do that?
Hope I am not making life difficult for you :( Thanks
 
You should link your data sources appropriately and then bring in the fields you want from each data source.

You're not making my life difficult, you're making yours difficult. I can drop out of this at any time.
 
You should link your data sources appropriately and then bring in the fields you want from each data source.

You're not making my life difficult, you're making yours difficult. I can drop out of this at any time.

Thanks for your time. Yeah I know I am asking silly questions and you are getting offended. Apologies for the same. :(

My query is that I have these two tables.
http://screencast.com/t/thuVPPl7d.

Now I want to make a query to get all the selected fields, i.e. bcch_uptime from Union_part 1 and tch_norm_seiz from Union_part 2.

Initially I was doing it wrong, as you pointed out. But now I am changing the query as shown at http://screencast.com/t/rUAvdHWWmVl, but it shows me a syntax error.

Please help me out: how do I put my query?
 
If you're not going to read up on normalization, I suggest reading up on SELECT queries:
http://www.sqlcourse2.com/intro2.html

You only get to have one FROM clause. The way you include more than one data source in a query is via JOINs (LEFT, RIGHT, INNER). Those are covered in the tutorial link I just posted as well.
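Using the x/y/e/f example from earlier in the thread, and assuming the shared a, b, c fields are the matching keys, a JOIN sketch would look like this:

```sql
-- Join on the fields the two unions share, then pick fields from each side
SELECT U1.x, U1.y, U2.e, U2.f
FROM [Union 1] AS U1
INNER JOIN [Union 2] AS U2
  ON (U1.a = U2.a) AND (U1.b = U2.b) AND (U1.c = U2.c);
```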
 
