Google’s AI translation tool seems to have invented its own secret language (techcrunch.com)

On the downside, it's just a loop of Bender saying, "Kill all humans." Kidding aside, though, the last bits of the article seem to spell out a future in which, for better or worse, we don't completely understand what's under the hood. What exactly is the implication, crazy extremes aside, of systems that essentially build their own black boxes?
A few days ago I found an Asimov story, The Last Question[1], in the comments on some other HN post. The story is about computers far more advanced than today's, whose inner workings no human can completely understand.
I think even today a modern microprocessor is beyond any single human's ability to fully comprehend, and the same is certainly true of the vast field of computer science as a whole.
Maybe the future of computing and artificial intelligence is systems that are within our capability to build but beyond our ability to understand why they do what they do!
Is it a matter of the interplay between the algorithm and the mix of data being too complex for a human to know and predict the state? All outputs of the system are still theoretically predictable and reproducible (given the time/resources to reproduce the inputs), right?
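To make the reproducibility point concrete, here is a toy sketch (the `black_box` function is hypothetical, standing in for any opaque learned system): even if we can't explain *why* the system maps inputs to outputs, fixing the inputs and the random seed makes the output fully deterministic.

```python
import random

def black_box(inputs, seed):
    # Toy stand-in for an opaque learned system: the weights are
    # pseudo-random, so we don't "understand" the mapping, but with
    # the same inputs and seed the output is exactly reproducible.
    rng = random.Random(seed)
    weights = [rng.random() for _ in inputs]
    return sum(w * x for w, x in zip(weights, inputs))

run1 = black_box([1.0, 2.0, 3.0], seed=42)
run2 = black_box([1.0, 2.0, 3.0], seed=42)
assert run1 == run2  # same inputs + same seed -> same output
```

Of course, real training pipelines add nondeterminism (GPU scheduling, data shuffling), but in principle the claim holds: opacity is about explainability, not irreproducibility.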
This is flagged as a [dupe]; it would be useful to have automated links to the previous discussion in HN, maybe in the [dupe] tag.
I'll now go search for the previous discussion. The "past" link does not yield any results, and even searching for the TC URL does not find it.
It's a discussion of the Google blog post linked in the techcrunch article (which arguably just reports on that post, and thus maybe shouldn't have been submitted): https://news.ycombinator.com/item?id=13018201
Yes, exactly! Thanks!
But is it coördinate or compound multilingualism?