Commanders and Swordsmen
"I don't know a lot, but I have a lot to say."
Is a good commander a good swordsman?
If I am good at giving orders, am I truly the one doing the work, or just the brain behind the brawn? Does that mean the brawn has lesser value? And is it all brawn, no brain? No, I don't think so. Some great athletes are also great learners and thinkers.
Our class's valedictorian was a relay runner, racing for his school club. That lasted until 8th grade, after which the toppers were nerds (literally; no issues with that) and the weakest academic performers were the sports team captains. There are always exceptions, though.
In the realm of training, masters are usually the commanders: they know their craft well and understand how someone else needs to learn it. This is typically the case for martial arts, music, art, sports and other learnable skills.
However, it is never truly the case that the commander of an army is also its best soldier. Kings holding the highest positions of power throughout history have mostly been portrayed as fat, money- and power-hungry, indulgent, whining slobs of flesh and blood.
So, if I am able to write, "Create a detailed portrait of the Mona Lisa: half-body, subtle smile, folded hands, soft shading, Renaissance style, mysterious landscape background," does that make me Leonardo da Vinci?
I don't think so.
To truly create something, one needs to put one's mind into the creation. If I don't understand how to stroke a brush to portray the reflection of a tree in water, but I know how to make some bot do it, do I really know where I made mistakes? Do I know where I could have improved? And do I know whether that is how the reflection of a tree on a riverbank is supposed to look on canvas?
I don't think so.
The same applies to programming. Good programmers know how to code and take the help of a copilot when needed, and even then only in particular cases. Developers forget syntax all the time because that's a memory thing, but that does not mean one can rely on a GPT entirely to program functional software, a service or both. In fact, most of the code created solely by agents so far is buggy.
In order to make code function properly, one needs the ability to spot the bugs in it. AI is far from recognising its own mistakes -- making debugging a must-learn skill.
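To make that concrete, here is a minimal, hypothetical sketch of the kind of subtle bug that plausible-looking generated code frequently reproduces: Python's mutable default argument. The function names are my own invention for illustration; the point is that only someone who can actually read and debug code will notice that the "obvious" version is broken.

```python
def add_tag_buggy(tag, tags=[]):
    # Looks fine, but the default list is created once at definition
    # time and silently shared across every call that omits `tags`.
    tags.append(tag)
    return tags

def add_tag_fixed(tag, tags=None):
    # The fix: use None as a sentinel and build a fresh list per call.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

# The buggy version accumulates state between unrelated calls:
first = add_tag_buggy("a")
second = add_tag_buggy("b")
print(first)   # ['a', 'b'] -- not the ['a'] a casual reader expects
print(second)  # ['a', 'b'] -- same shared list object

# The fixed version behaves as intended:
print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b']
```

An agent can emit either version with equal confidence; it takes a human who understands the language's semantics to tell them apart.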
AI is like that one guy who knows something about everything, which is enough to impress the layperson. But the moment someone with 15 years of experience in some skill asks for the solution to a specific problem, it does its best and still falls short.
This is not the bot's failure but that of the poorly engineered data it has been fed.
In the future, there will be an overflow of bad code on the internet. And that's not just me saying it, since I am no expert: incidents like the Tea app and Replit's agent are recent cases in point.
That does not mean AI is a hoax. Not at all. The technology is real, and there's no escaping the fact that a lot of content, programs, code and even products available on the internet will have AI integrated into them in one way or another.
However, calling every new thing a "great shift" and a turning of the wheel of time is bogus. One should wait rationally and see how far the advancements reach.
The same goes for other fields, like quantum computing, which has produced nothing productive so far yet tends to gain splendid attention from investors. And now there's quantum machine learning.
Don't fall for the buzz; understand, and act accordingly.
Don't act like AI when you can be human.