This is an "Attributes" discussion. There are two ways to handle this.
The simpler way is to have a BOOLEAN data type for each yes/no attribute and store them all in a single record. One record, several fields.
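A minimal sketch of that flat layout, using Python's sqlite3 module (the table and column names here are invented for illustration; SQLite has no true BOOLEAN type, so 0/1 integers stand in for it):

```python
import sqlite3

# One row per tracked item, one yes/no column per attribute.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE item (
        item_id     INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        is_active   INTEGER NOT NULL DEFAULT 0,  -- yes/no flag (0 or 1)
        is_taxable  INTEGER NOT NULL DEFAULT 0,
        is_archived INTEGER NOT NULL DEFAULT 0
    )
""")
conn.execute("INSERT INTO item (name, is_active, is_taxable) VALUES ('Widget', 1, 1)")

# Reading the flags is a plain column fetch -- no joins needed.
row = conn.execute("SELECT is_active, is_taxable, is_archived FROM item").fetchone()
print(row)  # (1, 1, 0)
```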
The more complex way would be to make a list of possible attributes with code numbers, then make a parent/child table that is sparsely populated. Any flag would be TRUE if a child record exists for it. You make it false by deleting the child record for that attribute. The child table MIGHT be as simple as two integers, the parent record number as a foreign key and the code number (from a translation table, for example) as a foreign key. That is the ultimate and pure JUNCTION table.
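The junction-table variant could be sketched like this (again with invented names): the `item_attribute` table holds one row per TRUE flag, and a flag is tested by checking whether its child row exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item      (item_id INTEGER PRIMARY KEY, name  TEXT NOT NULL);
    CREATE TABLE attribute (attr_id INTEGER PRIMARY KEY, label TEXT NOT NULL);
    -- The junction table: two integers, both foreign keys.
    -- A row exists only when the corresponding flag is TRUE.
    CREATE TABLE item_attribute (
        item_id INTEGER NOT NULL REFERENCES item(item_id),
        attr_id INTEGER NOT NULL REFERENCES attribute(attr_id),
        PRIMARY KEY (item_id, attr_id)
    );
""")
conn.execute("INSERT INTO item VALUES (1, 'Widget')")
conn.executemany("INSERT INTO attribute VALUES (?, ?)",
                 [(10, 'active'), (11, 'taxable'), (12, 'archived')])

# Setting a flag TRUE = inserting a child row; setting it FALSE = deleting it.
conn.execute("INSERT INTO item_attribute VALUES (1, 10)")
conn.execute("INSERT INTO item_attribute VALUES (1, 11)")

is_archived = conn.execute(
    "SELECT EXISTS(SELECT 1 FROM item_attribute WHERE item_id = 1 AND attr_id = 12)"
).fetchone()[0]
print(is_archived)  # 0 -> no child row, so the 'archived' flag is FALSE
```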
But let's look at normalization (your original comment). If you have 30 yes/no attributes for each primary thing you are tracking, the question is whether you need to "normalize" the 30 fields that hold those 30 yes/no values. The question is the same whether we are talking about 30 byte-sized integers, 30 yes/no fields, or 30 bits "packed" into a single field and manipulated with the binary AND, OR, and XOR operators.
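The "packed bits" variant mentioned above works the same way in any language; a short sketch of the AND/OR/XOR mechanics (the flag names are hypothetical):

```python
# Hypothetical bit positions for three of the 30 flags.
ACTIVE   = 1 << 0  # bit 0
TAXABLE  = 1 << 1  # bit 1
ARCHIVED = 1 << 2  # bit 2

flags = 0
flags |= ACTIVE | TAXABLE    # OR sets flags
assert flags & TAXABLE       # AND tests a flag
flags ^= TAXABLE             # XOR toggles a flag off (or on)
assert not (flags & TAXABLE)
print(flags)  # 1 -> only ACTIVE remains set
```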
I think that in this case I would vote for 30 yes/no fields in the single record rather than multiple child records. Here you have to ask whether you would be violating the rule that "the values in the record must be uniquely relevant to the entity selected by the primary key." I don't think this would be such a violation.
Trying to separate out the attributes would be possible, I suppose, but let's put it in a different perspective. Suppose this was a simple address database where you stored first and last names separately. If you had two people named Smith, would you want a table of possible last names and store the index to the correct name? Or if you had two guys named Richard, would you store an index to the correct first name? No; in both cases you would store the literal value, ignoring the potential for duplicated values in one or both name fields.
Therefore, in my considered opinion, you probably should not bother trying to normalize that aspect of your records any further. From a practical viewpoint, you can save space by compacting the flags into a single binary field. Because of the overhead of JUNCTION tables, even sparse ones, it is almost unthinkable that normalizing into JUNCTION tables would save much space compared to keeping the flags in the parent record.
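A back-of-envelope comparison makes the space argument concrete. The record count, the average number of TRUE flags, and the 4-byte integer size below are all assumptions for illustration, and real storage engines add per-row and index overhead on top of the junction rows, which only widens the gap:

```python
n_records = 10_000
n_flags   = 30
avg_true  = 15  # assumed: about half the flags set per record

flat_bytes     = n_records * n_flags * 1   # one byte per yes/no field
packed_bytes   = n_records * 4             # 30 flags packed into one 32-bit field
junction_bytes = n_records * avg_true * 8  # two 4-byte integers per TRUE flag

print(flat_bytes, packed_bytes, junction_bytes)  # 300000 40000 1200000
```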
I know this went far afield, but I was attempting to address your issues to normalize or not to normalize... THAT was the question.