My experience with Large Language Models


Artificial intelligence has changed the world as we know it. Many companies are dropping their previous metaverse frenzy to get their hands on some of the potential profit that AI products offer. At this point, you are probably familiar with the concept of what an AI model is, or at least have a rough idea of what the concept entails.

But what if I told you that we are not quite in the AI era yet? Or that AI will not "render jobs like X useless"?

Well, we might not be there just yet, and it might seem like the field is progressing at a rather alarming rate, but in reality we are still really far from a fully automated, accurate and efficient AI.

With this article, I want to talk a little bit about LLMs (Large Language Models), my experience with them as a student and frontend developer, and the impact they have on our world.

What is an AI?

An AI model, at its core, is a big neural network, trained on billions, heck even trillions of tokens. From what I understand, tokens are basically words, or pieces of words; and whenever you see a model with a number and a letter as its suffix, like "7B", that usually refers to how many parameters the network has. These models are then refined using rewards, like we use treats to tell our dogs what's good or bad. What I mean by this is that these models "try" to produce the response to a given prompt that earns the highest reward (aka the best response). This process is called reinforcement learning, and it's one of the most widely used methods for fine-tuning chat models.
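To make that "predict what comes next" idea a bit more concrete, here is a toy sketch in Kotlin. The vocabulary, prompt and scores are completely made up for illustration; a real model computes those scores with a huge neural network over a vocabulary of tens of thousands of tokens.

```kotlin
import kotlin.math.exp

// Turn raw scores ("logits") into probabilities that sum to 1.
fun softmax(logits: DoubleArray): DoubleArray {
    val maxLogit = logits.max() // subtracted for numerical stability
    val exps = logits.map { exp(it - maxLogit) }
    val sum = exps.sum()
    return exps.map { it / sum }.toDoubleArray()
}

fun main() {
    // Hypothetical candidates to follow the prompt "The cat sat on the".
    val vocabulary = listOf("mat", "keyboard", "moon", "dog")
    val logits = doubleArrayOf(4.2, 2.5, 1.1, 0.3)

    val probs = softmax(logits)
    vocabulary.zip(probs.toList()).forEach { (token, p) ->
        println("%-8s %.3f".format(token, p))
    }

    // Greedy decoding: always pick the most probable token.
    val next = vocabulary[probs.indices.maxBy { probs[it] }]
    println("Predicted next token: $next")
}
```

Real chat models don't always pick the top token either; they usually sample from that probability distribution, which is part of why the same prompt can give different answers on different runs.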

Now that you have, at the very least, a better grasp of what I am talking about and how it works, I'll talk about my overall experience with Gemini (formerly Bard), Google's "magnum opus" in the current AI arms race.

My experience with Gemini

If I could summarise my experience with Gemini in a single word, it would be "clueless".

"Why clueless?" You may ask.

I say clueless because, most of the time, you have to use its double-check feature to make sure it isn't hallucinating; other times, it would simply tell you that it doesn't know the answer to the question.

You might say that the questions I asked the model were "too complex" or "too recent". But nope, they were neither complex nor recent. In fact, I asked about the life of a former governor of La Hispaniola (which, if you didn't know, is the name of our island), Nicolas de Ovando, and the model replied with:

"I don't know the answer to the question yet"

  • Gemini

Afterwards, I was able to look up the answer to my question in, like, no time at all.

Another example would be generating code, specifically Kotlin. I asked about Kotlin naming schemes, as I didn't understand how to name files, and it took me three or four separate prompts to get an answer I could understand and apply to my code. The sketch below shows the kind of convention I was after.
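For reference, this is roughly the rule from the official Kotlin style guide (the class and function names here are made up for illustration): a file containing a single class is named after that class in PascalCase, while a file of only top-level declarations gets a descriptive PascalCase name instead.

```kotlin
// File: PaymentProcessor.kt — a file holding a single class is
// named after that class, in PascalCase.
class PaymentProcessor {
    fun process(amountCents: Long): Boolean = amountCents > 0
}
```

```kotlin
// File: StringExtensions.kt — a file with only top-level
// declarations gets a descriptive PascalCase name instead.
fun String.shouted(): String = uppercase() + "!"
```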

General thoughts

Whilst I do understand that AI models are in continuous development, and that they will get better as time goes on, I think it's wrong to advertise them as something they are not. As actual artificial intelligence, I mean.

This might seem like I am downplaying the efforts of the countless bright minds behind this amazingly complex invention, but I personally think these models are nothing but fancy and expensive prediction machines, mostly because they are really good at recognising patterns and predicting what should come next; after all, that's how they choose their responses.

Ethical issues

To conclude, let's talk about ethical issues with AI.

As you probably know, I am an artist. I draw furries and publish my art on Pixelfed and Mastodon under CC-BY 4.0. I licence my artwork under a Creative Commons licence to avoid art theft: I don't want other people to use my art without my explicit permission, because it takes me a lot of time to find the motivation to draw and to actually finish an artwork.

It turns out that AI products like Google's Gemini, OpenAI's ChatGPT and Microsoft Copilot are trained on all sorts of copyrighted material, from licensed code used without attribution to artwork from artists who watermark their pieces precisely to avoid the previously mentioned art theft. And those companies are perfectly fine with such practices; in fact, let me mention a quote from an interview with Sam Altman:

"AI wouldn't be possible without copyrighted material"

  • Sam Altman

That, of course, is completely unethical, but that's the sad reality we are living in. Currently, there are no regulations that prevent companies from performing this kind of activity; it's a legal grey area.

I feel like once that gets sorted out, more people will start using AI. Oh, and let's not forget that a bunch of companies are shoving AI features in your face at every turn. I'm not sure if that's even ethical; it probably is, but it might as well not be.

It's a mess.