Solved Del All FK & PK Constraints

EDIT - On closing & reopening MSSQL the changes have taken effect fully. I can handle the 4 FK constraints manually; that's no big deal, but I'd appreciate it if someone could help with doing the same for PKs.

After importing to MSSQL I want to convert certain FK/PK data types. So I need to drop all FK constraints, then all PK constraints, in a db where different schemas exist, to allow editing the data types of the PKs/FKs; then I'll recreate the relationships manually. I'm new to MSSQL, so advanced Stored Procedures & dynamic SQL are a bit much atm. The eventual type change itself is a plain ALTER COLUMN, as sketched below.
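A minimal sketch of that final step, assuming a hypothetical dbo.Orders table whose CustomerID key is being narrowed (the statement fails while any PK/FK constraint or index still uses the column):
Code:
-- Hypothetical table/column; narrows the key once its constraints are gone.
ALTER TABLE dbo.Orders
    ALTER COLUMN CustomerID SMALLINT NOT NULL;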

A few threads out there relate to FKs. The one below (post #3, answered Oct 4, 2012 at 7:45 by Yaroslav) executes.
Code:
-- Builds one ALTER TABLE ... DROP CONSTRAINT statement per foreign key,
-- then executes the whole generated script.
DECLARE @SQL VARCHAR(MAX) = ''

SELECT @SQL = @SQL + 'ALTER TABLE ' + QUOTENAME(FK.TABLE_SCHEMA) + '.' + QUOTENAME(FK.TABLE_NAME) + ' DROP CONSTRAINT ' + QUOTENAME(C.CONSTRAINT_NAME) + ';' + CHAR(13)
-- Diagnostic version: swap in this SELECT to list the keys instead of dropping them.
--SELECT K_Table = FK.TABLE_NAME, FK_Column = CU.COLUMN_NAME, PK_Table = PK.TABLE_NAME, PK_Column = PT.COLUMN_NAME, Constraint_Name = C.CONSTRAINT_NAME
  FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C
 INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK
    ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME
 INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK
    ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME
 -- NB: the two joins below only serve the diagnostic SELECT; they duplicate
 -- rows (and hence DROP statements) for composite keys, and those duplicate
 -- drops fail once the constraint is already gone.
 INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU
    ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME
 INNER JOIN (
            SELECT i1.TABLE_NAME, i2.COLUMN_NAME
              FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1
             INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2
                ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME
            WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'
           ) PT
    ON PT.TABLE_NAME = PK.TABLE_NAME

PRINT @SQL  -- review the generated script
EXEC (@SQL)
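A leaner way to generate the same script, sketched here against the sys.foreign_keys catalog view, yields exactly one row per constraint (so no duplicate DROP statements for composite keys) and resolves the schema from the object itself rather than by constraint name:
Code:
-- Sketch: one DROP per FK, driven by the catalog view directly.
DECLARE @SQL NVARCHAR(MAX) = N'';

SELECT @SQL = @SQL
            + N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(fk.schema_id))
            + N'.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
            + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';' + CHAR(13)
  FROM sys.foreign_keys AS fk;

PRINT @SQL;   -- inspect before running
EXEC sys.sp_executesql @SQL;

sp_executesql is used here only because the variable is NVARCHAR(MAX); EXEC (@SQL) works just as well.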

Originally I had 56 relationships; now only 4 show when I run the script below. Yet if I go into a table's Design view and right-click a field > Relationships, the existing relationships (which are not among the 4 outstanding below) are still there, despite refreshing the entire db in Object Explorer. Generating a new Diagram also shows the relationships still existing.
Code:
SELECT  *
FROM    INFORMATION_SCHEMA.CONSTRAINT_TABLE_USAGE
WHERE   CONSTRAINT_NAME IN ( SELECT CONSTRAINT_NAME
                             FROM   INFORMATION_SCHEMA.TABLE_CONSTRAINTS
                             WHERE  CONSTRAINT_TYPE = 'FOREIGN KEY' )
SELECT  *
FROM    INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE
WHERE   CONSTRAINT_NAME IN ( SELECT CONSTRAINT_NAME
                             FROM   INFORMATION_SCHEMA.TABLE_CONSTRAINTS
                             WHERE  CONSTRAINT_TYPE = 'FOREIGN KEY' )
SELECT  *
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE   CONSTRAINT_TYPE = 'FOREIGN KEY'
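As a cross-check, a quick sketch against sys.foreign_keys shows what actually remains in the database right now, independent of anything the SSMS designers have cached:
Code:
-- Count and list the FKs that genuinely still exist.
SELECT COUNT(*) AS remaining_fks FROM sys.foreign_keys;

SELECT SCHEMA_NAME(schema_id)        AS [schema],
       OBJECT_NAME(parent_object_id) AS [table],
       name                          AS [constraint]
  FROM sys.foreign_keys;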

If we're able to get the above working I'd be most grateful if someone could alter the code to do PKs as well. Hopefully once I've drafted a few Stored Procedures & spent a bit more time in MSSQL it'll become apparent.
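For the PK side, a sketch along the same lines (review the PRINT output before executing; all referencing FKs must already be dropped, since SQL Server won't drop a PK that is still referenced):
Code:
-- Sketch: generate a DROP for every primary key constraint.
-- Referencing FKs must be gone first or these DROPs will fail.
DECLARE @SQL NVARCHAR(MAX) = N'';

SELECT @SQL = @SQL
            + N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id))
            + N'.' + QUOTENAME(t.name)
            + N' DROP CONSTRAINT ' + QUOTENAME(kc.name) + N';' + CHAR(13)
  FROM sys.key_constraints AS kc
 INNER JOIN sys.tables AS t
    ON kc.parent_object_id = t.object_id
 WHERE kc.type = 'PK';

PRINT @SQL;   -- review first
EXEC sys.sp_executesql @SQL;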
 
I too am curious about the need to convert the datatypes of fields involved in Foreign Key Constraints.
 
Int > TinyInt/SmallInt/BigInt... Seems no point in wasting space with some of these. I don't need to waste space by generically assigning these PK/FK data types to int. Don't get me wrong, some will be, but I'm going through them now determining what sizes I reasonably need. No point assigning an int PK when there will be < 5 records & the FK is used all over the place (don't worry, my normalization is pretty good).
 
Oh, in that case, I truly believe you are wasting time and effort to save some space that doesn't need to be saved. I can definitely see doing something like that in a green-field database, where you don't have to manage existing architecture and data and the motivation of saving space is pertinent; it wouldn't cost a lot to architect and implement there.

I will defer to anyone who makes a strong case that using SmallInt or BigInt saves sufficient storage space in SQL Server to justify this exercise, but this is not the place I'd invest my billable hours for a client, or my salary for an employer.
 
It saves space when you have a million records. In the specific case that you have a million records and switch from a field size of 32 to 16 bits, you save 2 MB on that table on a machine whose physical memory (these days) will be measured in multiple GB. If you are on a system with a memory constraint that severe, the LAST thing you want to do is diddle with data sizes. If you had multiple fields being shrunk, you might get 15 to 20 MB back. On a modern machine of, say 8 GB, that 20 MB savings is about 0.25% of memory. If you are THAT tight that you needed it, the software effort would cost more than a couple of larger memory boards or a larger disk.
 
Thanks all for your honesty. Thinking of network performance with Azure, doing a very rough calculation (no doubt wrong :p): I have around 108,000 records * 6 average sub-records = 648,000 sub-records in this one area alone, relating to one container. I won't go into detail spilling out my application. My very first calculation, on the very first small item, going from an Int to TinyInt, is a 1.944 MB saving, viewed thousands of times throughout this relevant grandparent container. Then compound that with circa half a dozen to a dozen examples like this, which are more nested than this one, & several users accessing remotely. If I'm wrong I'm wrong & that's fine; I appreciate I'm not considering key compression/caching. I think it's quite a complex schema/project, abnormal to most normalization rules, where historical records are necessary & not considered redundant data.
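For transparency, the arithmetic behind that 1.944 MB figure (int is 4 bytes, tinyint is 1, so 3 bytes saved per key value):
Code:
-- Quick check: 648,000 rows * (4 - 1) bytes per key column.
SELECT 648000 * (4 - 1)             AS bytes_saved, -- 1,944,000 bytes
       648000 * (4 - 1) / 1000000.0 AS mb_saved;    -- 1.944 (decimal MB)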

I thought, and still think, there would be a noticeable difference in network performance & the db performance overall. Not getting defensive, in the interest of learning:
  1. Reduced packet size (typical MTU is 1,500 bytes; we're well in that territory I believe), giving a faster db & fewer errors/drops/timeouts, especially in conjunction with Azure.
  2. Reduced costs in Azure transfer sizes as the db scales.
  3. Increased performance from sending less data over the wire.

Also, in just a couple of cases here, & contrary to the topic of reducing size: BigInt offers circa 4.3 billion times as many available records as the default int key size, and there I certainly wish to enjoy that headroom. This relates to 2 fields, as opposed to the 50 or so others, so hopefully the savings from the former outweigh the cost of the latter.
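Sanity-checking that factor with a throwaway query: bigint covers 2^64 distinct values against int's 2^32, so the ratio is 2^32.
Code:
-- bigint spans 2^64 values, int spans 2^32; the range ratio is 2^32.
SELECT POWER(CAST(2 AS FLOAT), 64) / POWER(CAST(2 AS FLOAT), 32) AS range_factor;
-- => 4294967296 (~4.29 billion)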

 
TinyInt has a max value of 255 - so useless in a table of 108000 records!

Beware also: sometimes you cannot join between fields of different datatypes (perhaps this is an Access limitation - I'm not 100% sure)

I think any savings will be negligible
 
Thanks Dave. OK, I'm greatly outvoted here & I take on board a unanimous vote from the experts with much greater experience than me. Thank you all for your input, greatly appreciated.
 
I suggest your PKs and FKs be the SQL Server Integer datatype (-2,147,483,648 to 2,147,483,647) for compatibility with the Access Long Integer data type.
Are you also aware that SQL Server PK's using the IDENTITY Property are not assigned until the new record has been committed to the table?
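A minimal sketch of what that means in practice, using a made-up dbo.Widgets table with an IDENTITY PK: the key value only exists once the row is actually inserted, after which an OUTPUT clause or SCOPE_IDENTITY() can hand it back:
Code:
-- Hypothetical table; the key is assigned on insert, not before.
INSERT INTO dbo.Widgets (WidgetName)
OUTPUT INSERTED.WidgetID          -- returns the freshly assigned key
VALUES (N'Sprocket');

-- Or, immediately after the insert, within the same scope:
SELECT SCOPE_IDENTITY() AS NewWidgetID;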
 
I'm sure that you can't create relationships on fields of different datatypes.

I just successfully created a query joining two tables on a tinyint field and a smallint field, though.

It made no sense logically in this test scenario, but SQL Server didn't object, since both are integers; roughly what I ran is sketched below.
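A reconstruction of that test, with invented temp-table names:
Code:
-- Join a smallint column to a tinyint column; names are made up.
CREATE TABLE #A (SmallCode SMALLINT);
CREATE TABLE #B (TinyCode  TINYINT);

INSERT INTO #A VALUES (7);
INSERT INTO #B VALUES (7);

-- SQL Server implicitly widens tinyint to smallint for the comparison.
SELECT *
  FROM #A
 INNER JOIN #B
    ON #A.SmallCode = #B.TinyCode;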

That would not be the case with other datatypes, of course, e.g. Dates and Chars. At least not without typecasting one or the other.

I'm getting away from the topic of the discussion, so I'll just stop there.
 
