If you look at tech Twitter, it feels like artificial intelligence is the only thing anyone is discussing. It’s a topic worth exploring, and probably something I should look into more myself.
That being said, as someone watching from the sidelines, I do have some observations that I wanted to jot down somewhere.
I think the technology behind these major AI projects is cool. I am a tech enthusiast through and through - so when new stuff like this comes along, I do find myself a bit giddy to try it out.
I don’t personally have any real-world use cases for adding AI to my current workflows, though I imagine there is some potential. Most of the experimentation I’ve done with DALL-E 2 or ChatGPT has been like a child with a new toy.
I can see it being helpful for solving some code-related problems, though I do feel careful review is necessary when working with these tools.
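To illustrate why that review matters, here’s a hypothetical example of the kind of plausible-looking code an assistant might suggest (the function names are mine, just for illustration): it works on a quick test, but hides a classic Python pitfall - a mutable default argument that leaks state between calls.

```python
# A plausible-looking helper an AI assistant might suggest.
# It appears correct, but the default list is created once and
# shared across every call that doesn't pass its own list.
def append_tag_buggy(tag, tags=[]):
    tags.append(tag)
    return tags

# Reviewed version: use None as the sentinel so each call
# gets a fresh list unless the caller provides one.
def append_tag(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(append_tag_buggy("a"))  # ['a']
print(append_tag_buggy("b"))  # ['a', 'b'] - surprise: state leaked between calls
print(append_tag("a"))        # ['a']
print(append_tag("b"))        # ['b'] - independent calls stay independent
```

The buggy version passes a one-off test just fine, which is exactly the kind of thing a human reviewer has to catch.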
Era of Close Enough
One thing that I quickly realized while playing with DALL-E 2 was that it is never quite what I want.
Like whenever I write a prompt, I try to describe what I’m picturing in my head - and although the AI comes up with some pretty great interpretations - it’s still never quite right.
I’m sure that as these tools evolve, and with the right amount of iteration on a concept, I’ll get even closer to what was in my head - but I’m not sure it will ever be exact.
How to Learn
AI learns by observing. This means it takes in large samples of text, art, music, code, and whatever else to figure out how best to generate a response.
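To make “learning by observing” concrete, here’s a toy sketch of the core idea - not how production models actually work, which involves neural networks trained over billions of tokens, but the same principle in miniature: observe which word tends to follow which in a sample, then generate by picking the most common continuation.

```python
from collections import Counter, defaultdict

# Toy "observation" step: tally which word follows which
# in a tiny sample corpus.
sample = "the cat sat on the mat and the cat slept"
words = sample.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most commonly observed continuation of `word`."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' - observed twice after 'the', vs 'mat' once
```

Everything the toy model “knows” comes from the sample it observed - which is exactly why the question of what material goes into that sample matters.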
There is a very important argument to be made here about the use of copyrighted materials, especially when the companies behind AI profit from them in any way - even more so when the AI blatantly plagiarizes those materials.
There is another side of this that has crossed my mind: people also learn by observing. People learn from others, including from copyrighted materials.
The speed and scale of how AI learns compared to humans is an obvious difference. Also, I’m not sure if AI truly knows something or if it’s just ridiculously fast at looking things up. The core idea remains, though: learning by observation.
I don’t think AI is going to be taking jobs anytime soon. I’m not saying it won’t happen - automation has been an upward trend - but I don’t think it will happen too quickly either. AI is evolving very fast, maybe too fast for it to be a good idea to make it a core part of a business. Plus, as it evolves, it becomes more and more expensive to maintain.
I think when companies stabilize the growth of this technology, it may become more of a concern.
The more likely outcome of AI, rather than taking jobs, is the new opportunities it presents. Implementing AI into work and creative flows gives us the chance to enhance what we do now.
We’ll learn to adapt alongside the technology - just like we did with electricity, radio, and computers.
I also believe AI could open up more avenues for assistive technologies, working with AI to perform tasks that may have been more difficult in the past.
At the end of the day, with the number of folks talking about it online, it’s very easy to get caught up in the idea that AI is an invasion. You tend to forget that these voices are not representative of everyone. Just like the popularity of cryptocurrency and NFTs, this discourse mostly lives inside the bubbles we put ourselves in.
Most non-tech folks who don’t spend their time doom-scrolling social media either have no knowledge of these topics or a cursory understanding at best. Once more “normal” people engage with AI, I’ll start to believe there is more to it than there is now.