How ChatGPT Can Hurt Your Problem Solving Skills: An Anecdote
An anecdote on how my reliance on ChatGPT led my problem solving skills to get worse.
Introduction
ChatGPT is one of those tools that are, without exaggeration, revolutionary. With its release, the world entered a completely different phase, which is only natural: here is a tool that relieves you of the tedious searching and digging around the internet for answers; you simply prompt it and get your results.
Discovery
When it was released, everyone was using it. I started noticing people from my circle ChatGPT'ing their way through everything, from solving math equations and summarizing documents to writing cover letters and presentations. It was wild.
I couldn't blame them. I saw how practical it was and how much time you could save by using it, which is really what tools are for. However, I am personally very slow to adopt hyped-up tools, even one as grand as ChatGPT. Keeping up would require subscribing to every shiny new release, which I find distracting given how frequently new tools get published.
Last July, I had my finals. As per usual, you have to go through ~200 slides of material per class (we had 8 classes, so that's 1600 slides). Over the course of a few months, and after seeing how effective ChatGPT had been for my friends and classmates (and occasionally for myself), I decided to use it to prepare for exams, since, naturally, I am always behind when it comes to studying. Since I am someone who is somewhat articulate and precise when it comes to learning, most of my prompts were of the sort:
- What's the difference between UART and SPI in embedded systems?
- What's the difference between safety and liveness in distributed systems?
- What's the difference between a service and a thread in Android? etc.
Getting hooked
Little by little, I developed the reflex to prompt ChatGPT for answers instead of using DuckDuckGo. I noticed that I was visiting websites like Stack Overflow and GitHub less and less for answers and code snippets, and sticking with ChatGPT for everything. It was only two weeks ago that I noticed what my reliance had led to.
The amount of time I spent thinking of solutions decreased; I would almost always just prompt for the code and paste it in with some adjustments. I noticed that my problem solving had become slow and strained, that it had deteriorated.
Trying to get unhooked
I started to consciously fight the urge, the reflex, to prompt for answers, and to come up with my own the traditional way I used to. Thankfully, I am slowly getting back to normal.
The solution here lies not at either extreme, relying almost completely on the tool or refraining from it entirely, but somewhere in the middle: finding a balance in which you benefit from the tool without relying on it so much that it does damage.
I personally found that the middle ground is to limit my prompts to a set of questions/scenarios:
- Scaffolding projects or libraries.
- Comparative questions, especially when it comes to figuring out which tool to pick for which job.
- Learning new concepts.
Conclusion
The key takeaway from this anecdote, I would say, is to pay attention to how we're using our everyday tools, how much we're relying on them, and whether that's a good or a bad thing. Tools affect us, and as ironic as it sounds, we could very well end up enslaved by our own creations.
Note: Funnily enough, I prompted for a title and a description for this post (they suck; I am going with my own). Guess I have some more work to do...