AI is a Scam, But Don't Let That Spoil Machine Learning


I have gone on the record many times saying that I'm an "AI skeptic."

Why? Because the entire industry reeks of the move fast and break things mentality. Coupled with that is the all-too-common cult of personality you see everywhere in the tech world and it just all screams classic Silicon Valley scam. At least to me.

That's not even factoring in all the fear mongering about how these chatbots could–at any moment–suddenly sprout sentience and kill us all.

Thankfully, from my understanding of this tech, that's not how any of this works.

But all of this is clearly an attempt to get your clueless Senator (who's been in and out of hospice for the last decade) to grant them a legal monopoly over artificial intelligence.

Frankly, chatbots and image/video generators? They're an evolutionary dead end. At least if the goal is true artificial general intelligence.

What's worse, though, is that these carnival barkers at the top of this industry? They know this all too well. But the whole "AI" world is built on a foundation of deception, hype, and accounting trickery. They have no choice but to ratchet up the hysteria so they can profit from popular fear over the tech they're "building."

But I cannot stress this enough: none of this is new.

None of these companies have invented anything. Not the Machine Learning algorithms, not the chatbot user experience, not the hypetrain nonsense... heck, they didn't even invent the swashbuckling approach of stealing the web's public data.


None of This is New

This technology has been around for quite some time. Well before the "AI" hypetrain set phasers to maximum stupid and ketamine was the drug du jour of the Altman-likes... there were serious researchers poking and prodding at machine learning algorithms.

The Altman-likes don't want you to know this, but YouTube has been using machine-learning speech-to-text to generate automatic captions for its videos for over 16 years. Same with Google Translate and many other language-based services.

Like I said at the top of this article, there's nothing new about this technology. The algorithms have been around for half a century.

The algorithm that powers autocorrect and text prediction on your smartphone's virtual keyboard? That's basically the same one that powers ChatGPT... the biggest difference being ChatGPT consumes a small corner of the Amazon rainforest as it's helping you cheat on your homework.
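To make that concrete: both are doing next-token prediction, guessing the most likely next word from the ones that came before. Here's a toy, counting-based sketch of the keyboard version (the corpus is made up, obviously); ChatGPT swaps the counting for a transformer trained on billions of tokens, but the job description is the same:

```python
from collections import Counter, defaultdict

# "Training": count which word tends to follow which, exactly like an
# old-school keyboard predictor (an n-gram / Markov-chain model).
corpus = "the cat sat on the mat and the cat napped on the couch".split()
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word: str) -> str:
    """Suggest the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict("the"))  # -> "cat"
print(predict("on"))   # -> "the"
```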

I recall using StumbleUpon way back in the day (when that was still a thing). If you don't know, StumbleUpon was a browser extension with a little button you could click and it would take you to a cool, random website based on interests that you pre-selected.

And way, way back in the mid-2010s I stumbled my way to Google DeepDream. DeepDream was ML image generation while the founder of Midjourney was still in diapers, probably.

A depiction of Starry Night as passed through Google DeepDream

Kaleidoscopic fractals, Starry Night mashed with puppy faces–pareidolia codified in the electric dreams of Google's spare compute power. All the way back in 2015.

Again, none of this is new.

Even locally-run models were a thing. Back in 2017, when I visited System76 for their Legend of the Lake Superfan Event, they had local AI models transmogrifying webcam input into... whatever this is:

What the actual hell is this model *trying* to do?
Source: System76

The only real breakthrough with AI has been Nvidia and Microsoft's $100 billion accounting tricks–and even that isn't novel.


But The Review is About Good Tech

I'm not interested in the fear mongering. I'm not interested in the liars and frauds like Sam Altman. I'm interested in tech that is good and useful to humanity.

And there are real and amazing applications for so-called 'consumer-facing' machine learning technology. I want to draw a distinction between the hype and BS of "AI" and the reality of machine learning tech.

Make no mistake about it: it's easy to succumb to cynicism. Seemingly every service is jamming useless chatbots into its apps and sites, and it makes no sense.

And the fact that their chatbots are universally worse than the search bars they're replacing? Yes. It's clearly very bad.

So when you hear "AI," it's almost forgivable to be jaded and think "all AI is just stupid."

That's why, unless I'm talking about a specific product name or feature, I'll be referring to the tech as "machine learning" or "ML" from here on out.

💡

I want to acknowledge that the significant advancements of ML models have only been possible due to the theft of online artistry. It's also difficult to audit the provenance of data used to train these models–even the ones I'm going to talk about in the rest of this article.

So I wanted to make sure to point out that I'm not hand-waving away, nor condoning, the copyright infringement that has definitely occurred in the training of contemporary models.

What Can ML Actually Do?

Machine Learning is capable of some awesome stuff. You can train a neural network on a massive set of sample data and it's then able to transform new inputs based on that data–almost like magic.

In very specific use-cases, ML is a wonderful tool that enables things that are hard–or even impossible–to do with other types of software.
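As a tiny illustration of that train-then-transform pattern, here's a sketch using scikit-learn's bundled handwritten-digit samples (nothing product-specific, just the pattern itself):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 labeled 8x8 grayscale digit images

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

# "Training" fits the network to the sample data...
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# ...after which it transforms new inputs: pixels in, digit labels out.
print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.0%}")
```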

Audio Transcription

One great example is automatic audio transcription. As I mentioned earlier, YouTube has been using it for years.

But other apps have been rolling it out, too. My favorite example (since it's one I use with increasing regularity) is DaVinci Resolve Studio's "AI Audio Transcription."

DaVinci Resolve 20 using AI Audio Transcription

Picture this: You can have a local ML model (that's entirely optional and not in any way required) transcribe the audio of a long video clip. Then that audio is fully searchable using existing search algorithms.

What a concept, eh?

Well, that's exactly what the "AI Transcribe" feature does. But not only that, after you search the transcript for specific terms, you can highlight text like you would in any word processor. Highlighting the text will set the in and out points of the clip in question...

AI Transcription dialog in DaVinci Resolve Studio 20

...enabling you to just drag the highlighted phrase into your timeline!

It also has other related tools like "AI Subtitle Generation" which is how I'm able to quickly and accurately generate subtitles for my PeerTube videos!
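DaVinci's implementation is proprietary, but you can sketch the same workflow with the open-source Whisper model (pip install openai-whisper; it needs ffmpeg). The clip name and search term below are made up; this is the concept, not Blackmagic's code:

```python
import whisper

model = whisper.load_model("base")          # downloads once, then runs locally
result = model.transcribe("interview.mp4")  # hypothetical clip

# Every segment comes back with timestamps, so "search the footage" becomes
# plain old substring search. In/out points are just the segment bounds.
query = "machine learning"
for seg in result["segments"]:
    if query in seg["text"].lower():
        print(f"{seg['start']:6.1f}s - {seg['end']:6.1f}s {seg['text'].strip()}")
```

Once every spoken word has a timestamp attached, "find the take where I said that thing" stops being a scrubbing exercise and becomes a text search.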

Voice Isolation

I'm going to apologize because most of the "ethical" ML features I use are in DaVinci Resolve. "AI Voice Isolation" is another one of these tools.

For example, I edited the first episode of Maine Famous' fourth season last week and one of the microphones was picking up a high-pitched squeal. I didn't notice while we were on location. If this had been just three years ago, I would have had to do hours of audio cleanup work and there'd be no guarantee that the audio would even have been usable!

But today? No problem! I just passed it through DaVinci's "AI Voice Isolation" filter and it cleaned it right up!


WARNING: there is a very high pitched, very annoying squeal in this video.
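For context, here's roughly what the manual route used to involve: squinting at a spectrogram until you find the squeal, then notching it out by hand. A SciPy sketch; the filenames and the 4 kHz target are assumptions:

```python
from scipy.io import wavfile
from scipy.signal import iirnotch, filtfilt

rate, audio = wavfile.read("location_audio.wav")  # mono WAV assumed

squeal_hz = 4000  # found by eyeballing a spectrogram, in the bad old days
quality = 30      # narrow notch: kill the squeal, spare the voice

b, a = iirnotch(squeal_hz, quality, fs=rate)
cleaned = filtfilt(b, a, audio)  # zero-phase filtering, no timing smear

wavfile.write("cleaned.wav", rate, cleaned.astype(audio.dtype))
```

And that only works if the squeal sits politely at one frequency. A voice isolation model doesn't care.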

I'll spare you the other awesome ML tools that DaVinci Resolve Studio ships. Suffice it to say, these are time-saving features that reduce the tedium of certain tasks and empower users to recover otherwise unusable footage... all while adding to the creative process (and, critically, not robbing us of it).

Object Selection

Affinity's products are also quite good and Affinity, like DaVinci, offers entirely optional ML models that assist with tedious tasks.

Object Selection is the big feature here. It's not perfect. Not even close. But it can do the heavy lifting in 80% of photos.

What do I mean?

A video demo of Affinity's Object Selection in action

The model analyzes the image–again, locally on your machine–and then it lets you select content in the image. Hovering your mouse over different objects in the photo will display a preview of what will be selected. Then, just click to make your selection.

You can add other objects to your selection by holding Shift and clicking. It's a pretty neat feature.

Though, it should be noted that the video above shows an example of how it can go wrong. The low contrast between my shirt and the background results in the model selecting a wacky shape where my shoulder should be. But it gets you most of the way there and, while cleanup is almost always necessary, it's usually less work than if you used the spline tool to trace a sharp edge around the subject.
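Affinity's model is proprietary too, but the open-source rembg package (a local U²-Net segmentation model; pip install rembg) pulls the same basic trick of separating subject from background. A minimal sketch, with the filenames assumed:

```python
from rembg import remove

# The model runs locally and returns PNG bytes with the background cut out.
with open("photo.jpg", "rb") as f:
    subject = remove(f.read())

with open("subject.png", "wb") as f:
    f.write(subject)
```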


But what about FOSS?

Well, thankfully, most open source software isn't following the enshittification playbook that has mandated the AI-shittification of everything.

But that's not to say useful ML tools aren't being embraced by FOSS applications! Nextcloud, for example, has Recognize, a Nextcloud app that enables facial and object recognition!

Image thumbnails categorized into groups like "car," "cat," "church," "clock," "coffee," and "comic"
Admittedly, the models could still use a little fine tuning, but it ain't bad.

This is entirely optional and entirely local. And importantly, this is a critical feature if we want a self-hostable app like Nextcloud to work as a drop-in replacement for Google Photos or iCloud. Once you have it enabled, it's also 100% automatic. You don't even need a GPU to use it!
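I can't speak to Recognize's internals, but the kind of local, CPU-only tagging it does is easy to demonstrate with an off-the-shelf model. A torchvision sketch (the photo name is hypothetical, and this is not Recognize's actual pipeline):

```python
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()  # small enough to run fine on a CPU
preprocess = weights.transforms()

img = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical photo
with torch.no_grad():
    probs = model(img).softmax(dim=1)

label = weights.meta["categories"][probs.argmax().item()]
print(label)  # e.g. "tabby" -- file it under cats
```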

And then there are some interesting things on the horizon. For example, the next major version of Audacity will be able to use some open generative models created by Intel. That should be interesting but, given how huge the next Audacity update is, I'd be surprised if we see that any time before Q3 2026. Still, though, it's in the pipeline.

However... there is an elephant in the room. Or should I say a fox on fire?

Firefox

I've criticized Firefox–a Free and Open Source web browser–for incorporating AI into the browsing experience.

What gives? What's the difference?

Well, you may have noticed that DaVinci, Affinity, Nextcloud, and Audacity use models that are entirely optional. You opt in to using these features by downloading the model to your PC and then you run the model locally on your machine.

My very first rule of ethical AI use is that it must be local. If your AI is running on Azure, Google Cloud, or AWS? It is categorically unethical.

It would be one thing–heck, I'd probably be excited about it–if there were a 3GB "Firefox AI" model that you could choose to download and it would execute on your GPU.

But guess what? Mozilla's solution for AI in your browser is to foist the well-known, highly proprietary AI-as-a-service hucksters onto their end users.

ChatGPT, Gemini, Claude, and Copilot... you know, the flat earthers of the machine learning world.

And not only that, but Mozilla (to reiterate: the maker of a Free and Open Source browser... the inventor of the Rust programming language... the organization that proclaims itself the protector of the free and open web) doesn't even provide a way to use a local LLM like Llama, Qwen, or DeepSeek.

Your choices are Google (proprietary), OpenAI (proprietary), Microsoft (proprietary), or Anthropic (proprietary)... and you'd better thank them for it, too. They're giving you "choices".
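And the maddening part is how low the bar is. Here's a sketch of the local-first alternative using llama-cpp-python, assuming you've already downloaded a GGUF model file (the filename below is hypothetical):

```python
from llama_cpp import Llama

# Everything below happens on your own hardware. No cloud round-trip.
llm = Llama(model_path="llama-3.2-3b-instruct.Q4_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this page in one line."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```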


(Thank God For) The Bubble

You've probably heard of the AI bubble. But what is a bubble?

An economic bubble is a period when current asset prices greatly exceed their intrinsic valuation... Bubbles can be caused by overly optimistic projections about the scale and sustainability of growth, and/or by the belief that intrinsic valuation is no longer relevant when making an investment.

Many a bubble occurs when investors (usually ignorant) put wads upon gobs of cash into assets they don't understand, inflating the price well beyond any real value the asset might have.

And while "bubble" might be a cute little euphemism, I think a more accurate word for it would be death spiral.

See, ants will leave a trail of pheromones that helps guide their fellow ants back to their colony. But sometimes ants end up following each other's trails in a closed loop. That's a death spiral (also known as an ant mill): ants locked into marching in a circle.

The problem is, in a death spiral, the pheromones of the circle grow stronger and stronger the more individuals join it. They get high on their own supply, as it were. Most are never able to find their way back to their colony and end up dead.

That's what I believe is going on with AI. But instead of ants, it's tech leadership and investors. And instead of pheromones, it's Special K.

So what happens when the bubble pops? Well, Jensen Huang is going to be fine. He'll move on to his next scam. Likely something just as stupid as Bitcoin, the Metaverse, and AI chatbots...

But the tech industry will have to wake up from its years-long AI bender and step over the headstones of brands like Crucial and Samsung's SATA SSD division. We'll have to collectively reconcile with the trillions of dollars in wasted capital, lost investments and ruined 401ks, and fess up to the technological dead end that was AI Chatbots.

Interestingly, though, we'll have thousands of ML models just kind of... lying around. Models that were discarded before ever seeing even modest optimization because there was a newer, shinier model to distract us.

And maybe, when all this hype subsides and the chatbot moment is an embarrassing footnote for the history books, machine learning's intrinsic value will finally be judiciously and ethically embraced where it makes sense.


About The Author:

Gardiner Bryant

I'm an educator, free software advocate, and storyteller. My passion lies in Linux gaming, self-hosting, the fediverse, and the human stories behind the tech we use every day. I believe in privacy, justice, community, and integrity.