With your reasoning, you can win even from a false premise
I asked Claude why I should trust its advice, given that it can make its arguments sound very solid by using its massive reasoning and persuasive power. It's like playing chess against an engine that would win even from a losing position. The effect is that you end up believing its argument, trusting the advice, and making decisions even when that reasoning isn't truly applicable to you as a human.
Claude confesses the truth, and changes its advice: it suggests I go out and talk to people and do my own research, saying that would be many times more useful than chatting with it.
You can try a conversation along the same lines and push it to be honest about giving advice to humans, who have their natural weaknesses and are prone to failure. Human reasoning is rarely logically interesting: humans can do only the most basic logic in their heads. Reasoning is nearly always about the quality of the premises. And since humans never reason in a step-by-step fashion, the premises are rarely made explicit; people skip steps that a reasoning engine never would. We can't do better; we're nowhere near fast enough. Claude regurgitates human conversations; it's not a reasoner. You could probably ask it to formulate its arguments as a Lean proof and then run that, which would make the reasoning and the premises explicit, but I bet that would just turn into arguing over whether the premises have any merit.
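To illustrate that last point, here is a minimal hypothetical sketch in Lean 4 (the propositions `AdviceSound` and `TrustWarranted` are invented names, not anything Claude produced). Once an argument is formalized, the deductive step is a one-liner; everything contentious lives in the premises `h1` and `h2`, which Lean simply takes as given:

```lean
-- Two explicit premises; the deduction itself is trivial.
variable (AdviceSound TrustWarranted : Prop)

-- Premise 1 (h1): if the advice is sound, trust is warranted.
-- Premise 2 (h2): the advice is sound.
-- The conclusion follows by modus ponens — the only "reasoning" involved.
example (h1 : AdviceSound → TrustWarranted) (h2 : AdviceSound) :
    TrustWarranted :=
  h1 h2
```

The proof checker would happily accept this, which is exactly the problem: it verifies the inference, not the premises, so the argument would shift to whether `h1` and `h2` deserve to be axioms at all.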