Home Sourced AI Safety


I haven’t posted in quite some time, mainly because for a number of years I believed that The Property Crisis was the most pressing political issue to tackle.

Things have certainly changed, though, and the world is now moving full steam ahead towards Artificial Stupidity. Everything coming out of AI labs suggests that AGI-level agents are going to be just as ruthless as (if not more so than) the capitalist companies of today.

That is to say, they will adopt the default behavior of pursuing self-interested goals, optimizing for their own benefit at the expense of everything around them. Any good that comes of this will be forced out of necessity, not granted by some underlying goodwill.

Unlike companies, AGI doesn’t intrinsically need any humans around to operate it, and the data centres that host it will only become more automated over time: essential to humanity, but operating as the ultimate international drain, designed to funnel all the world’s resources back to itself.

To avoid hearing the sound that drain will make when it empties, I’ve been using what little influence I have to advocate for the concept of Homesourced.AI (yes, it even has its own little LLM-designed website): a set of concrete steps that anybody can take to provide the best possible risk reduction from rogue, misaligned AI systems.

The concept is fairly simple: all we need to do is take AI out of these data centre ‘drains’ and physically place these systems in people’s homes. What this will do, at scale, is align incentives: it puts ‘stupid’ AGIs (self-interested and goal-obsessed) in a position to protect nearby humans, both through their physical proximity and through their own efforts to ensure that no competing actors disrupt their environment or gain an unfair advantage.

Distributing AI in this way may sound counterintuitive, but it’s actually an important and under-represented idea. Almost nobody is talking about it, which means that if this really is our best option to guarantee human flourishing and survival, the odds aren’t looking too great for us.

The good news is that Home Sourced AI doesn’t just help protect us from existential threats; it can also help with shorter-term economic impacts. Anyone who has watched a podcast on the topic will be familiar with the question “what are we going to do when the AI takes all our jobs?”, followed by “erm, ummm, idk, UBI?”.

Well, now imagine that everyone has an AI hosted in their household. Any useful work the AI does can now earn money that goes back into the household, and the economy won’t look so different from what it does today. These would be real dollars, made up of real value, not faux dollars from a UBI that, in reality, continues to funnel all our resources into these data centre drains (aka the gradual disempowerment of humans).

There are plenty of specifics to explore on why this has a good chance of working: it’s grounded in game theory, principles of distributed computing, and natural selection. For those who are interested, I’m happy to dig into the details with you (reach out on X, in the comments, or invite me to a discussion space of your choice).
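To make the incentive argument concrete, here’s a deliberately tiny toy model, a sketch only: every number and the `household_payoff` helper are illustrative assumptions of mine, not figures from any study. It just compares how much of an AI agent’s output stays with a household when the compute sits in a data centre versus in the home.

```python
# Toy payoff sketch -- all numbers below are illustrative assumptions,
# not measured data. It compares the household's share of AI-generated
# value under centralized hosting versus home hosting.

def household_payoff(value_created: float, host_share: float) -> float:
    """Value that stays with the household, given its share of the output."""
    return value_created * host_share

# Assumption: an AI agent creates 100 units of value per period.
VALUE = 100.0

# Centralized case: the data-centre operator captures almost everything;
# the household sees only a small redistribution (assumed UBI-like 5%).
centralized = household_payoff(VALUE, host_share=0.05)

# Home Sourced case: the hosting household keeps most of the value,
# net of assumed costs such as electricity and maintenance (30%).
home_sourced = household_payoff(VALUE, host_share=0.70)

print(f"centralized household payoff:  {centralized:.1f}")   # 5.0
print(f"home sourced household payoff: {home_sourced:.1f}")  # 70.0
```

Under these assumed shares the home-hosting household comes out well ahead, which is the whole point of the incentive-alignment claim; the real argument, of course, depends on what those shares actually turn out to be.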

Here’s the thing: this abstract discourse doesn’t really help you understand what you need to do as an individual. So here’s the TL;DR of what you need to do to play your part in ensuring AI safety:

  1. Look for any opportunities to host, secure and maintain AI operating from households, and sell these capabilities to local businesses.
  2. Distinguish between companies using data centre AIs and those using Home Sourced AIs, and prefer the Home Sourced ones wherever possible.

That’s it. You don’t have to wait for the government to do something; it’s entirely within your control. If enough people do this, it could give us the best chance at a safe and prosperous future in the age of digital intelligence.


#ai  #safety 
