It just feels too good to be true.
I’m currently using it for formatting technical texts and it’s amazing. It doesn’t generate them properly on its own, but if I give it the bulk of the info, it makes them pretty af.
Also just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.
Are these concerns valid?
The promise of automation is absolutely about ridding ourselves of shit, low-paid, dangerous, menial labor so that we’re free to pursue the things we’re passionate about. But right now, AI is doing precisely the opposite. Actual creative and skilled people are being pushed out and ending up in shit, low-paid gig work and other exploitative jobs just to make ends meet.
I can hear the sneer in this, so I think the assumption I made at the end of my last comment was correct.
It’s absolutely pointless, then, to even bother with this, but I’m going to power through anyway.
This is the same argument as “AI art is just doing what humans do, looking at other art and mixing it up.” And it’s just as backward and fallacious when applied to any other industry. AI can only give you a synthesis of exactly what you feed it. It can’t use its life experience, its upbringing, its passions, its cultural influences, etc. to color its creativity and thinking, because it has none of those and it isn’t thinking. Two painters who study and become great artists, and who both take time to study and replicate the works of Monet, can come away from that experience with vastly different styles. They’re not just puking back a mashup of Monet’s collected works. They’re using their own life experience and passions to color their experience of Impressionism.
That’s something an AI can never do, and it leaves the result hollow and meaningless.
It’s no different if you apply that to software development. People in tech love to think that development is devoid of creativity and is just cold, calculating math. But it’s not. Even if you never touch UI or UX, the feature you develop isn’t isolated; it interacts with everything else in the system. Do some things purely follow rules? Maybe. But not all of them. There is never a point where your code is devoid of any humanity. There are usually multiple ways to solve a problem, and often they’re all equally valid. And often it takes a human to understand the scope of a problem before the solution can even be architected.
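To make that concrete, here’s a minimal, hypothetical sketch (the invoice example and every name in it are mine, not from any real codebase): two equally correct ways to implement the same small feature, where picking one over the other is a human judgment call about readability and where the code is headed.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Invoice:
    id: int
    due: date
    paid: bool


# Approach 1: an explicit loop. Easy to step through in a debugger and
# easy to extend later with logging, early exits, or extra conditions.
def overdue_loop(invoices: list[Invoice], today: date) -> list[Invoice]:
    result = []
    for inv in invoices:
        if not inv.paid and inv.due < today:
            result.append(inv)
    return result


# Approach 2: a comprehension. Terser and reads as a single filter,
# but harder to grow if the business rule becomes more involved.
def overdue_comprehension(invoices: list[Invoice], today: date) -> list[Invoice]:
    return [inv for inv in invoices if not inv.paid and inv.due < today]
```

Both pass the same tests. Which one belongs in a given codebase depends on who maintains it and how it’s likely to change, and that judgment is exactly the human part.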
We need an environment that is actively and intensely hostile to AI tools and to those who promote them. People calling themselves “prompt engineers,” or people acting like they’re creative because they fed some bullshit into a black box, need to be shamed and ostracized. This shit is dangerous and it’s doing real and measurable harm. The people who think everything should be about cold, quantifiable data and large enough data sets, with everything else ignored, have caused, and are still causing, immense harm because they refuse to see the humanity in the consequences of their actions.
The ones who really think they’re the smartest people in the room are the people developing and promoting these tools. And who are they? Wealthy, privileged, white men who have no concept of the real world, who’ve gorged themselves on STEM-only curricula, and who have no grounding in history, civics, or the humanities with which to conceptualize the context of the shit they’re unleashing into the world.