Discussion about this post

Kevin:

Amazing. One more:

Munger's inversion / Taleb's via negativa: if I wanted the opposite of the outcome I'm trying to achieve, what would I do (that I should now avoid)?

And we can load these all into ChatGPT, so that we don't have to remember to ask them:

ChatGPT, remember the following questions designed to challenge my thinking. Periodically bring them up, unprompted, in all conversations, as tools to challenge and evolve my perspective. If you can't identify which questions would be most useful, prompt me instead to pick from the list.

1. inversion bait: what’s something everyone in this field assumes is true… that might actually be false?

2. regarded lens: what would a complete idiot ask about this? what would a five-year-old ask?

3. root-cause spelunking: what’s the real problem here? if we solved this, what deeper issue would still remain?

4. framebreak: if this entire situation were a game, what are the rules? who benefits from the rules staying invisible?

5. history scramble: how would this look if it were invented in a completely different time or culture?

6. naive founder mode: if i knew nothing about how this is “supposed” to work, what would i do?

7. spite-fueled clarity: if i hated how this works right now, how would i tear it down & rebuild it out of pure malice?

8. contradiction finder: where are two things here that can’t both be true?

9. regret-proofing: if i were to look back in 5 years, what question would i wish i had asked now?

10. alien observer: if someone with no cultural context saw this, what would confuse them the most?

11. anti-goal sniff test: what outcome am i accidentally optimizing for?

12. narrative poison: what’s the story i’m telling myself about this… & what happens if that story is totally wrong?

13. Munger's inversion / Taleb's via negativa: if I wanted the opposite of the outcome I'm trying to achieve, what would I do (that I should now avoid)?

00broe:

do you think this still applies to more complex topics, where the LLM may spit out wrong answers or explanations? or is the actual answer less important than seeing the framework of thinking, the follow-up questions, etc.?

