Do You Trust This Computer? [video]
doyoutrustthiscomputer.org

Regulation comes up later in the film. Ross Anderson, professor at Cambridge University, recently wrote: https://m.cacm.acm.org/magazines/2018/3/225467-making-securi...
"Once software becomes pervasive in devices that surround us, that are online, and that can kill us, the software industry will have to come of age. As security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection. The notion that software engineers are not responsible for things that go wrong will be put to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve."
The Ivory Tower has a way of phrasing concepts such that they are framed by finality and totality. The mentality comes with having made it through the admissions process and passed your final exams with a good grade point average, conferring some lofty distinction into one's possession.
So, to look at the words:
> The notion that software engineers are not responsible for things that go wrong will be put to rest for good.

I'd have to say that this sort of high-minded platonic concept needs some revision.
There needs to be a degree of responsibility ascribed to some classes of systems development:

> The notion that *some* software engineers *cannot* be found responsible (in part or in whole) for *some* things that go wrong will be put to rest in *some* situations.

Meanwhile, there is very obviously a line to be drawn between the programmer who programs their VCR clock to time a recording, the programmer who programmed the VCR as a consumer-grade product intended for purchase by unlicensed individuals, the TV network that broadcast the television show at the time the individual programmed their VCR to record 60 minutes of broadcast on a given channel, and the programmer who locked me out of the firmware on my smartphone.
> software engineers
I've had the idea for a while that most of us practice software development rather than software engineering. I have a degree in computer engineering, but I consider myself a software developer now rather than a software engineer. The reason: I don't practice engineering in the legal and professional sense.
In engineering school we learn about engineering as a formal process and professional responsibility. Both of these things are largely absent in most shops now. I get that not all projects need to be professionally engineered with all the costs and timelines associated with it. I think this is why agile came along. Sometimes it's just good enough to hack something together and demo it until a manager says it's time to release.
But there are many other projects which are extremely important to society and should follow more traditional engineering practices. There shall be external and internal engineers who must formally approve any product before release. There shall be specific and testable formal requirements. There shall be a formal design and documentation for engineers to review and people to develop from. etc. etc.
There's no legal framework to distinguish devices that matter from devices that don't.
There's no clear demarcation between devising a convenient contraption and implementing an inadvisable hack that leads to a hazardous outcome, especially within the scope of web-based systems, since no portion of the internet is to be regarded as reliable, life-saving infrastructure.
I think the mistake is to trust packet-switched networks and peer-oriented protocols as reliable systems at all.
If you cannot control the whole system, end-to-end, and any unwitting peer can over-consume bandwidth (jamming traffic and communication with interference), effectively cutting you off from a necessity, why would you bet your life on the availability of that system?
Work is underway to support Time-Sensitive Networking in modern operating systems: https://schd.ws/hosted_files/elciotna18/b6/ELC-2018-USA-TSNo...
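For a rough sense of what prioritized traffic looks like from userspace, here's a minimal Linux-only Python sketch. The priority value and destination are illustrative, and a TSN-aware qdisc (e.g. mqprio or taprio) still has to be configured on the interface for the tag to mean anything:

    import socket

    # Linux-only sketch: SO_PRIORITY tags outgoing packets with a class
    # that a TSN-aware qdisc can map to a dedicated hardware queue, so
    # best-effort peers can't trivially starve this traffic.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Priority 6 is the highest a process may set without CAP_NET_ADMIN.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 6)

    sock.sendto(b"control frame", ("192.0.2.10", 5005))  # TEST-NET addr
    sock.close()

Note this only helps on links you administer end-to-end, which is exactly the grandparent's point.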
An effort was started by Mudge to use static analysis and fuzzing to assess a range of software: https://34c3.cyber-itl.org & https://theintercept.com/2016/07/29/a-famed-hacker-is-gradin...
"... first-of-its-kind method for testing and scoring the security of software — a method inspired partly by Underwriters Laboratories, that century-old entity responsible for the familiar circled UL seal that tells you your toaster and hair dryer have been tested for safety and won’t burst into flames. Called the Cyber Independent Testing Lab, the Zatkos’ operation won’t tell you if your software is literally incendiary, but it will give you a way to comparison-shop browsers, applications, and antivirus products according to how hardened they are against attack. It may also push software makers to improve their code to avoid a low score and remain competitive."
This is a flawed concept from the outset. You either have appliances or computational platforms.
Turing-complete systems are arbitrarily flexible as a matter of principle.
If the firmware can be altered; if any addressable memory can be changed, and a system relies on an internet connection for maintenance and support, it is an unreliable system.
Their site mentions relative assessments of different products, not absolute claims:
"These sorts of questions can’t be answered in an automated fashion, due to theoretical obstructions ("undecidability") first identified by Alan Turing. Thus, to measure security in a practical fashion, we employ heuristics. We don’t need to find any specific vulnerabilities in order to assess how secure software is. Instead, we can observe the software’s safety features, build quality, complexity, and other heuristics. Some heuristics directly impact software security, while others might just be properties of software that are generally only found in cases where development teams know what they’re doing. As long as they correlate, it doesn’t matter."
Sounds like stop-and-frisk TSA pat-down security theater. This is not the kind of concept that I'd ever expect to withstand a determined adversary.
Especially in machine learning scenarios, don't expect a show of force to prove an adequate deterrent. Posture and presumptive correctness aren't enough to protect you from entities ignorant of fear and indifferent to wastefulness.
Thanks for the astute observation.
It's not clear that their goal is deterrence. If they are transparent with heuristics for ranking vendors, it could provide fabled market-based incentives for a vendor race to the top, narrowing the security gap between "best" and "worst" vendors. If highly rated vendors advertise their achievement, buyers could factor the rating into purchase decisions. The heuristics would need to evolve as the floor of vendors' security practices is raised.
The bigger problem is what happens if the market-based approach fails? Will regulators step in for certain classes of software? Regulators are less likely to understand Turing-completeness.
Considering the topic of this thread, the debate is fundamentally about automation and machines operating with the authority to make decisions.
There are at least four broad, complicated areas of effect that need to be addressed to begin to grasp the scope of what's at stake here.
1. Hardware interfaces for software control.

This is the entry point that puts lives in harm's way. Software can do whatever it wants to a terabyte of RAM, but there's no outcome if that RAM isn't driving a real-world system. This is the point at which the tires spin and the steering wheel turns. Machine learning is mostly about probability and statistical likelihood, so good luck unit testing that (a sketch of one mitigation follows point 4 below).
2. Decision making and autonomy.

This is where, after the system comes online, checks its supplies, and determines it has everything it needs to attempt an action, it decides to do something. After Windows boots up, the autonomous entity sits at the keyboard, checks for disk space, battery life, time of day, gas in the tank, oil pressure, and anti-freeze, and then considers options for where to go for a drive. Perhaps the corporation that owns the entity configures a default strategy of taxi service, before hauling cargo from a mine to a warehouse, or trash pickup assistance. Taxi service reduces wear on the vehicle chassis, so the entity opts to drive as a taxi. It checks in with a ride-hailing service, offers its resources to the pool, and queues up for an assignment. Since we're talking about an autonomous system, there's a division of activity between deciding to try something and carrying out the behavior that effects outcomes. This phase is simply parsing resources and allocating them, not using them to do a thing. Budgeting capacity is a different form of autonomy than performing specialized work.
3. Succeeding at a discrete task.

This is where the car negotiates the entire trip from garage to destination and back, without damaging property, harming others, or getting stranded, and hopefully turning a profit, or at least fulfilling its role as an appliance with a warranty (90 days, one year, 5 years or 100,000 miles?). This may be vertically dependent on multiple sub-products originating from multiple corporations, all interacting to produce the talent or skill exhibited by the system. Maybe it's a smart refrigerator, and it's defaulted to always keep a gallon of fresh milk available, but you never drink milk, and you don't know how to turn the feature off. Who's to blame for all that spilt milk? Maybe your autonomous taxi can deliver milk to the houses without smart refrigerators. Who's to say whether it's a problem or not? Maybe the self-driving vehicle wouldn't have run over that pedestrian if you could have convinced the fridge to stop replenishing the milk. Which brings us to...
4. Economic realities, sociological effects, propaganda and psychological operations.

Yeah, great. Your car can drive itself around 24/7, earning a passive income for you while you surf the internet and bandy about more great ideas. You and 20 million other people are all doing the same thing. This has a transformative effect in aggregate. Your car works as advertised. No bugs. No accidents. No injuries. A perfect driving record. But there could be deeper sociological ramifications to the introduction of such technology that are unanticipated, and thus hard to envision. Did Facebook drastically augment the outcome of an election, if not willfully then perhaps by sin of omission? Would we have imagined such a conversation in 1999, before the dot-com bust? Why didn't AOL produce this sort of acrimony? So, if Uber's gig economy and side hustle isn't disruptive while we work out the kinks that run over pedestrians, what about the decentralized Mastodon for self-driving side hustles? What about self-driving transoceanic zeppelins, over international waters, that launch their own weather satellites? Who will stop them from polluting low earth orbit?

Some of this isn't about software bugs. Some of it is behavior and psychology.
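Point 1's "good luck unit testing that" deserves a concrete counterpoint: you can't exhaustively test a statistical model, but you can wrap it in a deterministic safety envelope that is testable. A minimal sketch, where the limit, model, and names are all invented for illustration:

    import random

    MAX_STEER_DEG = 30.0  # hypothetical mechanical limit

    def safe_steering(model_output_deg):
        # Clamp the ML controller's output to a hard safety envelope.
        # The envelope is deterministic and unit-testable even though
        # the model feeding it is statistical.
        return max(-MAX_STEER_DEG, min(MAX_STEER_DEG, model_output_deg))

    def fake_model(sensor_noise):
        # Stand-in for a learned controller: occasionally wild outputs.
        return random.gauss(0.0, 15.0) + sensor_noise

    # Statistical test: whatever the model emits, the actuator command
    # never leaves the envelope.
    for _ in range(100_000):
        cmd = safe_steering(fake_model(random.uniform(-5.0, 5.0)))
        assert -MAX_STEER_DEG <= cmd <= MAX_STEER_DEG
    print("envelope held for 100,000 sampled inputs")

The model stays probabilistic; the thing standing between it and the tires doesn't have to be.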
The comment about admission to and completion of college leading to simplistic conclusions comes across as unfair. Pretty much every educated person went through such a sequence to get through college (others are educated without college, but they are a tiny minority...). And plenty of us are able to understand that the world has nuance.
On your latter point, I agree that there needs to be a degree of responsibility for some software development, but it's a complex area and I can't really organize 'blame'. What blame should I get for writing a better web server that some dictator uses to serve up his orders and have people killed? Compare that to the programmer of a phone's firmware, and to the person who teaches a robot how to determine whether someone is killed by a weapon. It's too easy to look at that last person and say, without thinking, that they are the bad one.
> "The notion that software engineers are not responsible for things that go wrong will be put to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve."
That sounds like a really hard problem to solve. Not only would you need to have an audit trail for the history of changes to the code, you'd also have to figure out a way for third parties to audit the code and make sure they're looking at the same data that is being released by the project.
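One building block here is already routine practice: publish a digest of each release so third parties can verify they're auditing the same bytes that shipped. A minimal sketch (the file name and digest are placeholders):

    import hashlib

    def sha256_of(path):
        # Stream the file so large releases don't need to fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Digest the project publishes alongside the release (placeholder).
    PUBLISHED_DIGEST = "<published sha256 hex digest goes here>"

    if sha256_of("release-1.0.tar.gz") == PUBLISHED_DIGEST:
        print("auditing the same bytes the project released")
    else:
        print("mismatch: do not trust this artifact")

In practice the digest itself would be signed (e.g. with GPG); otherwise an attacker who can swap the artifact can swap the checksum too.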
Additionally, reacting to arbitrary changes in the environment probably requires more resources than even a multinational corporation can provide. So you'd need a way for third parties to be able to make their own changes to the software, somehow add them to the codebase without creating too much administrative overhead for the core team, and audit those changes in an automated fashion so that they don't create thousands of new exploits.
And how would you even manage the sum total of all these Frankenstein versions, reconciling the original software with the changed versions? How would all these geographically disparate groups of programmers even communicate?
We obviously need funding for a lot more CS professors to come up with solutions to these issues.
The Anderson article mentions research into extended maintenance practices:
"As we build more complex artifacts, which last longer and are more safety critical, the long-term maintenance cost may become the limiting factor. Two things follow. First, software sustainability will be a big research challenge for computer scientists. Second, it will also be a major business opportunity for firms who can cut the cost. On the technical side, at present it is hard to patch even five-year-old software. The toolchain usually will not compile on a modern platform, leaving options such as keeping the original development environment of computers and test rigs, but not connecting it to the Internet. Could we develop on virtual platforms that would support multiple versions?"
ELC 2016: Approaches to Ultra-Long Software Maintenance: https://elinux.org/images/f/fb/Approaches_to_Ultra-Long_Soft...
ELC 2017: Long-Term Maintenance, or How to (Mis-)Manage Embedded Systems for 10+ Years: https://www.linux.com/news/event/ELCE/2017/long-term-embedde...
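Anderson's closing question about virtual platforms has a partial answer in current practice: freeze the entire toolchain in a container or VM image and archive it with the source. A hedged sketch, assuming Docker is available and using a made-up image name:

    import subprocess

    # Sketch: build decade-old code inside a frozen toolchain image
    # instead of on the host. The image name is a placeholder; the real
    # work is archiving an image that pins compiler, libc, and tools.
    LEGACY_IMAGE = "registry.example.com/legacy-toolchain:2008-frozen"

    def build_in_frozen_env(src_dir):
        subprocess.run(
            ["docker", "run", "--rm",
             "--network", "none",       # keep it off the Internet,
             "-v", src_dir + ":/src",   # as the quote recommends
             LEGACY_IMAGE,
             "make", "-C", "/src"],
            check=True,
        )

    build_in_frozen_env("/path/to/legacy/source")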
Thanks for those links. Even with those recommendations, I think we honestly have no idea how to maintain software systems for 20+ years. A shame, because almost every other kind of industrial system we produced up until about 30 years ago _did_ in fact have the ability to be maintained for decades. Look at the B-52 (https://en.wikipedia.org/wiki/Boeing_B-52_Stratofortress). Still going strong 60+ years after its first flight. Designed to last. Individual ones were built to last. Sure, it's seen upgrades over the years, but the core was just so well thought out and so _simple_.
We might never be able to build systems like that again. Simple is key, and we just can't help ourselves.
What mainframe systems do sounds pretty close to the quote. They can still run decades-old software, even if the hardware platform underneath now looks completely different, since they can emulate everything from the old system, or have newer interpreters for the same old languages.
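To make the emulation idea concrete, here's a toy sketch: the "legacy" program is data that never changes, and only the interpreter underneath gets reimplemented for each new hardware generation. The instruction set is invented for illustration:

    # Toy "legacy machine": the decades-old program (the bytecode) never
    # changes; only this interpreter is ported to new hardware.

    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack[-1])
            else:
                raise ValueError("unknown opcode: " + op)
        return stack

    # A "1975 payroll routine" that still runs, unmodified, today.
    legacy_program = [("PUSH", 40), ("PUSH", 2), ("ADD",), ("PRINT",)]
    run(legacy_program)  # prints 42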
> That sounds like a really hard problem to solve.
Yes, but we still have to solve it… or, at the very least, make a good effort to improve the situation.
Biology and chemistry already have systems around them to help limit any damage from, for example, smallpox research or mishandling chlorine trifluoride, and they manage this without needing everyone to wear safety goggles when mixing yeast and water.
Why would it be the engineers and not business leadership? If BP has an oil spill do we blame their engineers?
Sure blame software engineers. I just do what my boss tells me to do. My boss is not a software engineer.
I enjoy the... irony? of visiting the site with a set of ad block and privacy related extensions and seeing a set of 'Sorry' messages that I can't see the trailer because of my privacy settings.
direct link: https://vimeo.com/263108265
I was hoping for a bit more in-depth material - allowing the experts to explore their topics a bit more, and perhaps discussing potential solutions. Where is the call to action for the viewer? What now?
It's good for non-technical folks to watch, but nothing really new since the 'Humans need not apply' 15 min documentary [0]
[0] https://www.youtube.com/watch?v=7Pq-S557XQU (2014)
Edit: added link to humans need not apply
I watched the first 20 minutes or so. What I really miss here is some narration. A documentary that is tied together by quotes from interviews and flashy stock footage is hard for me to follow.
This is exactly how I felt. The whole thing is like a stream of consciousness experience without any central story or narration. I couldn't take it. It's like one big infomercial or something
I agree. 10 minutes was all I could bear. It's just a big mix of sound bites.
I find it eerie that a documentary that isn't telling people what to feel or think or do is criticized for being incomprehensible and boring.
Maybe we're already far more doomed than people realize.
A narrator doesn't tell you "what to feel or think", they piece together what you're looking at and go deeper into concepts, whereas this quote after quote stream doesn't go very deep into anything.
In fact, I'd say it is currently telling you what to feel and nothing more, because most of the quotes in the first 10 minutes are derivations of "technology is scary, you should be scared".
This doc definitely gave me the heebie jeebies. But I don't think the point is necessarily to go deeper into these problems. Most will watch this and finally gain the intuitive sense of how intertwined intelligent machines are in our lives. It isn't just smartphones and laptops, but everything from the military to health, and in between. It gives a good light overview of what professionals are thinking. Most people are not afraid of AI, but I think we should be, to the point that we start making changes to how we develop it.
Exactly - really wanted to like and see it since Musk was promoting it.
Especially the quotes from random kids. They already had vapid quotes from adults, and the kid quotes added nothing.
Half an hour in I'm left thinking: "this is a great trailer, but where's the documentary?"
Very strongly recommended. The interviewed subjects are largely experts in AI, and many are concerned.
The film is likely a bit long to trigger much by way of discussion here, though that's not always bad.
Word is that free play / download is this weekend only. Grab a copy via yt-download if you can't watch immediately.
That's weird, because I just got to the part in Pinker's Enlightenment Now where he says the large majority of experts in both artificial and human intelligence are not concerned.
Who is misrepresenting the expert consensus? Or are they both misrepresenting the fact that there is a consensus?
"""Things AI researchers agree on:
- that documentary was pretty unhelpful
- Terminator images are usually inappropriate
- AlphaGo Zero (if not other Alpha•s) was pretty cool"""
(https://twitter.com/Miles_Brundage/status/983063456424308736)
I don't have any survey results to point to, but my impression from following AI researchers from industry/academia is:
* Modern methods are many leaps of understanding behind anything resembling AGI, so any concern about research groups developing a sentient computer program behind closed doors with no warning is probably misplaced.
* AI/ML causing large-scale unemployment will be a serious issue eventually, but it's difficult to make a strong case that it's happening right now.
* The ability to monitor and manipulate individuals using ML/AI is dangerous, doesn't depend on particularly advanced technology, and is already being used by corporations and governments right now. It's a lot easier to get the public worried about terminator-style robots than about (what appear to be) simple advances in advertising or law enforcement.
* There's a strong incentive for those selling "AI technology" to oversell its ability to generalize/improve automatically. To quote Elon (of all people), "It's a mistake to think that technology automatically improves. It does not automatically improve. It only improves if a lot of people work hard to make it better."–this applies to "deep neural networks" as much as anything else.
Given that it's the experts themselves talking in Do You Trust This Computer, and Pinker describing experts in his own work, I'd give stronger credit to the former, more direct, evidence. That is: letting the experts express their views directly.
There remains the possibility that there is a difference between "the set of top experts in the field included in this documentary" and "a survey or statement from the larger share of practitioners within the field". This ... runs into a few additional problems.
Documentaries can be (and with some frequency are) selective and nonrepresentational. That is, a documentary's goal is not to Reveal The Statistical Truth, but To Tell a Specific Story. Documentaries are driven by narrative, not random sampling and statistical analysis.
That said, stats can lie and crafted stories can be quite useful.
A large-scale sampling of opinion is also ... largely just that. A large-scale sampling of opinion. Even if that's expert opinion. It is not the same as arriving at a truth (unless the truth you're seeking to arrive at is "what is the typical or general opinion held on some matter by some population?").
The views and concerns of top practitioners within a field are often highly significant. They may still be inaccurate (or be inaccurately portrayed). But these are the people who've worked with a thing for a long time, who've seen what does and doesn't work, what is or isn't hyped. And you'll often find exceptionally strong critics of various fields or technologies.
Leading atomic scientists in the U.S. and USSR both came to oppose nuclear weapons: J. Robert Oppenheimer and Andrei Sakharov. The Father of the Nuclear Navy, USN Admiral Hyman Rickover, came to oppose nuclear power. There are numerous technologists who are now questioning the goal of universal connectivity (myself included, though my qualifications expressly offer no basis for credence). And in the field of AI, there are numerous significant, long-term, and leading researchers who are raising profound questions of advisability and risk.
Or, you know, you could go with the Good Humour guy, Pinker.
I find Pinker an unreliable and highly biased narrator.
Honestly, I personally don't believe that regulations would solve or diminish the potential issues related to AI. We will be constantly challenged by people/machines trying to dominate others, and for that reason, I don't see regulations being the solution, but education instead. I think that people should better understand the trade-offs that AI can bring to our lives and act based on that. Therefore, democratizing AI and educating people about it should be a good starting point for this problem.
Also, I tend to agree with Mark Cuban[1] about the importance of a philosophy degree in the near future. There will be so many issues to be assessed that such a degree would bring much value to society.
[1] https://www.cnbc.com/2018/02/20/mark-cuban-philosophy-degree...
I don't really get the argument for regulation. This is not nuclear material (something relatively easy to control). This is computers, something most 8yr olds in the western world at least have access to. If you regulate it in the USA or the USA+Europe will it be regulated in Russia? China? Can you even regulate it in the west if you want to without confiscating all computers?
I'm not saying we shouldn't try to make friendly AI (One of Musk's initiatives), rather I'm just saying I don't see how it's possible to remotely regulate this.
It looks good, with editing, graphics, stock footage and such, but it's not really for the HN demographic. Some of the AI commentary is also very exaggerated in places.
So I bailed after the first X minutes. Why can't we just unplug the power and/or the network cable and move on from this FUD?
Because without intelligent computers, the world we've built for ourselves would just... stop working. They have brought us medical advancements not possible 20 years ago. So we would pull the plug and kill many people. Not only that, but countries are competing against countries with their technology. So if America pulls the plug, everyone else is far more advanced and you have lost your military and political power. There is no "pulling the plug" unless the world is willing to pull the plug and willing to deal with what amounts to another dark age-- in which case, technology such as we see today is likely to develop again.
This tech is almost inevitable. We are creator beings. We want to make something that matches our intelligence-- and we always have. We've made pantheons of Gods to match and overcome us, we've made these autonomous beings in the stories we tell and in the dolls we played with when we were kids. The problem is that we don't know what machine intelligence may do when we let it out in the world. If they can learn the tasks we want them to learn, can they then learn how to learn a task we didn't assign them to? What kind of task would that be? How would we control it?
There may be no controlling it. But if we started focusing on this problem right now we may be able to figure it out. Instead, most everyone in AI is working to pump out as many "smart" things as possible, trying to develop better learning algorithms, get AI to become as human as possible, trying to make machines that behave like us but better, without paying attention to the future costs.
Maybe for the first time in human history we can learn without making the mistake.
One of the clips says over 60% of stock market trades are made automatically by computer, so my first thought is that this would be a bit like telling someone with 2 prosthetic limbs that if something goes wrong with them, he can just remove them both and get on with his life.
Finish watching and you might find out.
Could you tell us?
Don't waste your time. It's fear mongering. This would be fine if it had any suggestions at all about how to avoid the horrors it imagines, but its argument boils down to completely unspecified 'regulation.'
It's just a sensationalized docu. Take it with a grain of salt. I enjoyed it though.