I have been using AI tools for research synthesis, analysis, and even early prototyping. I do not fully understand how they work under the hood. And I have decided that is fine.
This is actually how I have operated my whole career. When I learned to dye fabric with natural dyes, I did not have a chemistry degree. I iterated. I tried things, checked the results, adjusted. I developed intuition for what worked through repetition, not theory. Same thing when I picked up Qualtrics, Tableau, Lookback, video editing. Researchers have always been tool-agnostic generalists who learn fast by doing.
The tech industry has a bias toward understanding-first. Engineers want to know how something works before they trust it. But designers and researchers have always operated differently. We work with humans, whom we also do not fully understand. We develop working models through observation and experimentation. That same approach applies to AI tools right now, and it is actually a superpower.
The people waiting until they "understand" how LLMs work before using them are falling behind the people who are just trying things, checking the output, and building judgment. The discomfort of not understanding your tools is real, but it is not new. Every time I switched industries, from fashion to scooters to pharmacy to automotive, I was using systems I did not fully get yet. The skill was never expertise in the tool. It was knowing how to ask good questions of it.