March 21, 2026
AI Tools

I have been experimenting as much as I can with AI tools like Claude. There is a generalized optimism associated with using these tools; they quite literally feel like magic. For me, someone with no coding experience, it is remarkable that I can type a standard English sentence and Claude, with almost no discernible effort, can build something for me. The tech future is here. Alongside the optimism, though, there is a generalized sense of ennui. As these tools become more efficient at mimicking a wider variety of human actions at ever faster rates, there's a nagging thought that humans won't be part of the conversation at all. It will be robots talking to robots, with humans unable to move at the pace of, or produce at the rate of, AI products.

I have begun developing my own rules for using these tools, which I will break into three parts.

  1. What I protect, and deliberately don’t use Claude to do.

  2. How I am using Claude to do things I cannot do.

  3. What I am using Claude to do that I find useful, but could do myself.

Protection:

I want to protect my ability to do deep work and think clearly, and to do that I need to ensure I am not using Claude to write or think (truly think) on my behalf. Offloading mental effort to Claude is a quick route to reducing my own capacity. As such, I don't ask Claude about my feelings, my career strategy, or my personal aspirations. I only ask Claude questions that inform my decision-making process (e.g., what percentage of Fortune 2000 CEOs have an MBA).

I also do not use Claude to write. Period.