This reminds me of something that happened on Twitter a week ago. A very brilliant lady accused a popular tech figure of owning a burner account that was fond of tweeting depraved thoughts. How did she reach that conclusion? Logic. She observed the writing style of the burner account and matched it to that of the accused. To bolster the claim, she fed his tweets to ChatGPT alongside those of the burner account. And the output was, of course, 90% likely the same person.
Now, a deeper dig into some of the burner account's old tweets proved that logic incoherent, but she held firmly to the belief that neither she nor the AI could be wrong in evaluating the two accounts, because she had trained it to give accurate outputs. So I liken it to the examples you've given here. Sometimes the end result of logic makes no sense because the foundation is shaky. Other times, the results get contaminated even when one starts out correctly. To see these things for what they truly are, people need to acknowledge the role their bias plays in the logic they so strongly believe in.
The same goes for the assumption that one cannot excel in a certain role without having held it before. That doesn't follow the traditional logic of maths. What it takes is trial, and the logic of doing things over and over again to attain mastery. Even if that means doing it a million times after failing.
Thoroughly enjoyed reading from you. Don't stop writing!
RE: Airtight Logic