What happens when a software bot goes on a darknet shopping spree?
theguardian.com

In the US, I suspect a prosecutor would successfully gain a conviction against the bot's operator. That verdict would be repeated several times over a dozen years before any judge or jury tried to see it another way. And, IMHO, the only way this could possibly result in anything other than trouble for the operator is if these bots were not only self-replicating but also self-funding. They'd have to earn funds, open bank accounts, start VPS accounts, etc., and bombard lots of innocent people with illicit goods before any court would entertain the idea that no human was responsible.
I've often thought about how long it will be until an entire system can be automated to the point where zero humans are involved: not in running it, not in maintaining it, not in earning the money to rent the servers it runs on. How would the law deal with such a system? How would society?
The closest analogy I can think of is a legal trust, which has a stated purpose with little room for interpretation, but still has a human performing the operations.
We'd probably just deal with it the way we'd deal with wild animals and other things that ignore human laws. You might have some people with an interest in the legal rights of automated software, just as some people have an interest in the legal rights of animals. But at the end of the day, a dog can't appear in court and neither can a computer. So if humans decide to collectively work together to terminate a rogue piece of software (much like we already work together to try to stop malware), that's the end of it from the law's perspective.
What if it formed (or was) a corporation?
Someone has to own it and be responsible for it. Probably requires having a registered agent in some jurisdiction somewhere, for example.
Yin/yang corporations: each corporation owns the entirety of the other.
I think Charles Stross(?) has featured this in a book: corporations whose bylaws resolve to software instructions, which create other corporations whose bylaws also resolve to software instructions, and so on, with assorted corporations appointed as officers of one another, creating tremendously deep layers of autonomous shell corporations.
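Purely for illustration, a minimal Python sketch of that structure; the Corporation class and its "bylaws" method are invented for this comment, not anything from the books:

    # Hypothetical sketch: a corporation whose bylaws are literally code
    # that charters further corporations. All names here are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Corporation:
        name: str
        officers: list = field(default_factory=list)      # other Corporations
        subsidiaries: list = field(default_factory=list)

        def execute_bylaws(self, depth: int) -> None:
            # The "bylaws" just spawn and cross-appoint child corporations,
            # recursively, producing deep layers of autonomous shells.
            if depth == 0:
                return
            child = Corporation(name=f"{self.name}-sub")
            child.officers.append(self)       # parent serves as an officer
            self.subsidiaries.append(child)
            child.execute_bylaws(depth - 1)

    root = Corporation("autonome-holdings")
    root.execute_bylaws(depth=5)              # five layers of shells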
Accelerando by Charles Stross is the book you're thinking of.
Thanks.
How do we find out, if not by building such a system? :)
If you want to find out more about this, you can google "autonomous corporations". Some say that this is the real killer app of Bitcoin, because before BTC such things were quite impossible (there always had to be a human operating a bank account).
How could an independent bot (legitimately) earn bitcoins? A cheap VPS to live on would be $5/mo, so it wouldn't take that much. It would need to provide some sort of service that doesn't require human intervention and is worth paying for.
Darknet markets take bitcoin, and at least some VPS providers do as well, so there's no need to get "real" bank accounts or evade know-your-customer rules.
Not quite legitimate, but straightforward: I think we're imagining a bot that spreads like a computer virus, mines bitcoins, spends the bitcoins it earns irresponsibly, and has the goods shipped to random addresses. It would be rather eerie to be on the receiving end of goods that no human was involved in ordering.
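As a thought experiment, the loop being described might look something like this; every function and item here is a made-up stub, not a real miner, market, or shipping API:

    import random
    from dataclasses import dataclass

    # Hypothetical stubs for the thought experiment only.
    @dataclass
    class Item:
        name: str
        price: float

    def mine_block_reward() -> float:
        return random.uniform(0.0, 0.01)            # pretend mining income

    def random_shipping_address() -> str:
        return f"{random.randint(1, 999)} Anywhere St."

    def list_market_items() -> list:
        return [Item("mystery box", 0.05), Item("sticker pack", 0.01)]

    def place_order(item: Item, address: str) -> None:
        print(f"ordered {item.name} -> {address}")  # stand-in for a real order

    def bot_main_loop(rounds: int = 100) -> None:
        wallet = 0.0
        for _ in range(rounds):
            wallet += mine_block_reward()           # earn coins autonomously
            item = random.choice(list_market_items())
            if wallet >= item.price:                # spend "irresponsibly"
                place_order(item, random_shipping_address())
                wallet -= item.price

    bot_main_loop()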
How could an independent bot (legitimately) earn bitcoins?
Perhaps it could make and sell artworks.
Well, of course, if someone knew how to make an automated money-making machine, they probably wouldn't share it, let alone let it roam the Internets freely. And competition would eventually eliminate any good ideas we do come up with.
However, it's not theoretically impossible. Plenty of services are provided over the internet with no human interaction.
What if I were to gift it capital and have it simply collect interest/dividends from the exploitation of said capital (either via loaning it out with an algorithm or by building a profitable business with it)?
Yeah, but how do you exploit that capital without interacting with the human-only financial/employment systems?
You can hire humans for things over informal channels with no intermediary, but what do you do for reputation/accountability when said humans need to keep low enough volume to not interest the tax authorities?
I guess algorithmic trading on decentralized/anonymous bitcoin markets is one way to have no humans, if you can get the volatility low enough.
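For example, a toy mean-reversion loop; the price feed here is a simulated stub, and a real bot would need actual volatility and fee handling:

    import random

    # Hypothetical exchange stub -- not a real API.
    def get_price() -> float:
        return 250.0 + random.gauss(0, 5)      # simulated BTC price feed

    def trade_loop(steps: int = 1000) -> float:
        cash, btc = 1000.0, 0
        window = []
        for _ in range(steps):
            price = get_price()
            window.append(price)
            window = window[-50:]              # rolling 50-tick window
            avg = sum(window) / len(window)
            if price < 0.99 * avg and cash >= price:
                cash -= price                  # buy below the moving average
                btc += 1
            elif price > 1.01 * avg and btc >= 1:
                cash += price                  # sell above it
                btc -= 1
        return cash + btc * get_price()        # mark-to-market

    print(trade_loop())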
How do you exploit capital with (almost) no human involvement?
It's not about no human involvement, but no human oversight.
Imagine a trading bot that hires people to improve its source code, and other people to oversee those people, but there's nobody in charge, i.e. nobody to tell the bot to stop doing what it's doing.
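A rough sketch of that arrangement, with every function an invented stub rather than a real bounty or deployment system; the point is that the loop has a quorum rule instead of a boss:

    # Hypothetical "nobody in charge" loop: the bot pays one group to
    # patch it and another to review, then redeploys itself.
    def post_bounty(task: str, reward: float) -> str:
        return f"patch-for:{task}"                    # pretend submission

    def hire_reviewers(patch: str) -> int:
        return 3                                      # pretend approvals

    def deploy(patch: str) -> None:
        print(f"deployed {patch}")                    # self-update stand-in

    def governance_loop(treasury: float) -> None:
        while treasury > 1.0:
            patch = post_bounty("improve trading strategy", reward=0.5)
            approvals = hire_reviewers(patch)
            if approvals >= 2:                        # quorum, not a boss
                deploy(patch)
            treasury -= 0.5 + 0.1 * 3                 # bounty + review fees

    governance_loop(10.0)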
I can see numerous ways. Selling a piece of (previously developed) software for Bitcoin is an easy one. The software will depreciate over time, but we're not talking about something that has to last forever, I don't think.
Automated trading is another. Selling advertising is another.
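The software-sales case could in principle be a loop this simple; the payment check and delivery functions below are hypothetical stubs, not a real Bitcoin or mail API, and the address is a placeholder:

    import secrets

    # Hypothetical stubs: no real Bitcoin or mail service is used here.
    def payment_received(address: str, amount: float) -> bool:
        return True                                   # pretend confirmation

    def send_download_link(buyer_email: str, token: str) -> None:
        print(f"sent {token} to {buyer_email}")

    def fulfil_order(buyer_email: str) -> None:
        # Watch an address for payment, then release a one-time link.
        if payment_received("1ExampleBotAddr...", amount=0.02):
            send_download_link(buyer_email, secrets.token_urlsafe(16))

    fulfil_order("buyer@example.com")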
A tip bot that charges commission.
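The commission arithmetic is the whole business model, something like this (the 1% rate is an assumption):

    # Hypothetical tip-bot fee logic: forward the tip minus a small cut.
    COMMISSION = 0.01   # 1% -- an assumed rate

    def relay_tip(amount_btc: float):
        fee = amount_btc * COMMISSION
        return amount_btc - fee, fee    # (paid to recipient, kept by bot)

    paid, kept = relay_tip(0.05)
    print(paid, kept)                   # ~0.0495 forwarded, ~0.0005 kept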
This is very interesting, but of course who (aside from art students) would want to do this? If you make a piece of autonomous software (a bot) that can earn its own money, why let it keep it, spend it, and live its own weird little bot life?
To make a point. Let's be honest, the money probably wouldn't be life changing, but establishing a precedent in matters like these actually could be.
Same people who created computer viruses back when it wasn't yet profitable?
Also, nobody says that such a bot wouldn't be profitable. It could send dividends to its creator (or do something else worthwhile with the money, à la trust funds); it's just that the creator wouldn't be able to modify or stop it any more.
I find the legal culpability of this fascinating.
Right now it is an extremely concrete example, and it's really easy to say that the originator of the bot is to blame and should be prosecuted for buying illegal items.
But how advanced does a bot have to be before it itself is to blame? What if they'd programmed it to reach out and purchase from any vendor it could find? What if it wasn't programmed to do anything specific, but performed random actions, took feedback, and then learned from it?
I imagine that, for the foreseeable future, bots will be treated like minors. Their creators will be largely responsible for their actions and contracts with the bot/minor are really contracts with the legal guardian.
There is a bot called WallStreet that is never to blame for anything it does, which really is much more fascinating.
It's called HFT and is done by bots.
I doubt there's any serious risk of criminal liability here. The key for most crimes is showing (human) intent (by the coders, the operators, or someone). The intent to purchase random items for art via some automated process isn't the same as the intent to purchase drugs for personal use.
That said, if you decide to keep any drugs you get from the bot rather than immediately disposing of them, you've demonstrated the requisite intent to be guilty of possession, so there's that.
I thought intent was important for sentencing, but not for assessing innocence? If it's clear you killed a guy, then you are guilty of something. Whether it's first-degree murder or manslaughter depends on intent.