The Movable Feast Machine
movablefeastmachine.org

My bedtime reading recently has been Propagator Networks[1][2][3]... and I keep daydreaming of a resource management system just like Dave describes in this video. I do believe "mostly correct" computing is going to play a big role in the future.
[1] http://web.mit.edu/~axch/www/phd-thesis.pdf [2] http://web.mit.edu/~axch/www/art.pdf [3] http://groups.csail.mit.edu/mac/users/gjs/propagators/
Maybe I've been living under a rock, but this is the first time I've seen robust-first computing and this is pretty fascinating. I'm imagining that the application of this to a physical scenario would involve, say, a grid of very low-power CPUs that could each be "taken over" by an atom.
I can see how this affords some nice properties for robustness. I'm less convinced about security.
I can imagine essentially a grid surrounded by a moat of structures that are computationally very difficult to send commands through without help from inside the moat. I can see how you'd, like, send a password hash through and something would come out and "get you," for lack of a better explanation.
But that seems like it would be about as secure as a real moat... which is to say, it's still only as safe as your expectations about your opponent's capabilities are accurate. So, you might have some ideas about how malware must do computation, and you can have your little computational immune system working to make that stochastically impossible. But then the malware writers will figure out weirder ways to do computation. Like maybe they give up on penetrating the moat and have the immune system do the nefarious computation. Or whatever. My point is that you still end up in an arms race, except it's an arms race within a complex system, so it's even more difficult to have an understanding of your opponent's capabilities than it is inside a von Neumann machine.
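For what it's worth, the "grid of very low-power CPUs that could each be taken over by an atom" picture upthread can be sketched as asynchronous, purely local events with no global controller. This is my own toy rendering of the idea, not Ackley's actual implementation; the diffusion rule and grid size are invented for illustration:

```python
import random

# Toy sketch of robust-first, locally-updated computing: a grid of tiny
# cells, each either empty or "taken over" by an atom. There is no global
# controller; each event picks one random site and runs a purely local rule.

W, H = 16, 16
grid = [[None] * W for _ in range(H)]
for i in random.sample(range(W * H), 40):        # scatter 40 distinct atoms
    grid[i // W][i % W] = "atom"

def event(grid):
    """One asynchronous event: a random atom diffuses to an empty neighbor."""
    x, y = random.randrange(W), random.randrange(H)
    if grid[y][x] is None:
        return                                   # nothing here; event is a no-op
    nx = (x + random.choice([-1, 0, 1])) % W     # wrap around grid edges
    ny = (y + random.choice([-1, 0, 1])) % H
    if grid[ny][nx] is None:                     # move only into empty space
        grid[ny][nx], grid[y][x] = grid[y][x], None

for _ in range(10_000):
    event(grid)

# Local rules only move state, never create or destroy it, so atoms survive.
count = sum(1 for row in grid for c in row if c is not None)
print(count)  # 40
```

Any single event touching only a tiny neighborhood is what makes faults local too: killing one cell loses at most one atom, not the computation.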
I'd love to see the same example with a regulator generating incorrectly programmed sorters. How does the system handle a programmer bug that inverts the sort direction?
This brings up a really interesting point. If the right mutations happen, we could get a machine that eats other machines and reproduces indefinitely. In other words, X-rays can give this machine cancer.
then you'd debug it, or program debuggers to flit around looking for programmer errors and then address them
the system is clear about its capabilities regarding 'correctness'
which seems to imply the programmer's job is to optimise with those built-in inaccuracies
but what those inaccuracies afford is what ackley is calling robustness and indefinite scalability
ackley addresses your question directly with a sorting comparison(o) from a later video
the graph doesn't show where maxwell's demon(i) horde sort would fall, but sorting corruption is addressed in the paper(ii)
unintended performance appears to be the reason for the advent of this system:

"The demon horde sort's performance may be just adequate, by that measure, but its robustness seems quite impressive. Figure 23 shows results of one experiment in which we randomly corrupted site memory with simulated bit errors at a range of probabilities. Each error occurrence selects a random site and then flips from one to eight of its 64 atomic bits. We can see that while channel length helps performance, it does not help robustness against this system perturbation -- but the system is strikingly robust anyway, tolerating upward of 10 multibit corruptions per million events with essentially no visible performance degradation, regardless of channel length. Above about 50 errors/Mevent the system reliably falls apart -- and the pathology appears to run a reliable course."

but going further, as to dealing with unintended performance of hardware:
if you have a multi-core system running an incorrectly programmed sort and one core fails, the whole thing shuts down; but an incorrectly programmed sorter in the demon horde will keep functioning even with failed cores, affording the opportunity to adjust while performing
(o) https://youtu.be/7hwO8Q_TyCA?t=688
(i) https://en.wikipedia.org/wiki/Maxwell%27s_demon
(ii) http://comjnl.oxfordjournals.org/content/56/12/1450.full.pdf...
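the corruption experiment quoted above can be loosely approximated in a few lines. this is my own simplification, not the paper's model: the 8-bit "atoms", the per-event fault probability, and the plain compare-and-swap rule are invented stand-ins

```python
import random

# Toy approximation of the bit-corruption experiment: values diffuse toward
# sorted order via purely local compare-and-swap events, while random bit
# flips corrupt state at some rate per event.

def run(n=100, events=100_000, flips_per_event=0.0, seed=0):
    rng = random.Random(seed)
    a = [rng.randrange(256) for _ in range(n)]   # 8-bit "atoms"
    for _ in range(events):
        i = rng.randrange(n - 1)                 # local sorting event
        if a[i] > a[i + 1]:
            a[i], a[i + 1] = a[i + 1], a[i]
        if rng.random() < flips_per_event:       # injected fault
            j = rng.randrange(n)
            a[j] ^= 1 << rng.randrange(8)        # flip one random bit
    in_order = sum(a[i] <= a[i + 1] for i in range(n - 1))
    return in_order / (n - 1)                    # 1.0 == perfectly sorted

print(run(flips_per_event=0.0))      # no injected faults
print(run(flips_per_event=1e-4))     # rare faults
print(run(flips_per_event=0.5))      # constant faults
```

in this toy version, low fault rates leave the array almost fully in order while high rates degrade it, loosely echoing the threshold behaviour the paper reports. the sorter never halts on an error; it just keeps re-sorting whatever state it finds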
I wonder how the Demon Horde Sort would work if the bounds were not in a nice grid like that? Is it possible to put inputs on the right and outputs on the left? What happens if the grid changes size?
I think the idea is that the bounds are finite but not fixed. The architect of the system could hot-plug in a bunch of nodes to any direction and the playing field would instantly expand. However, at any given time, there are a knowable, fixed number of nodes.
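One way to picture that hot-plugging (a hypothetical sketch, not the MFM's real tile protocol) is a grid keyed by coordinates, so the "bounds" are just whichever tiles currently exist; plugging in hardware on any side only adds keys, and nothing global is resized or re-addressed:

```python
import random

# The playing field is a dict keyed by (x, y) tile coordinates; its bounds
# are simply the set of keys that exist right now.

grid = {(x, y): 0 for x in range(4) for y in range(4)}   # initial 4x4 field

def hot_plug(grid, tiles):
    """Attach new tiles; existing tiles are untouched."""
    for xy in tiles:
        grid.setdefault(xy, 0)

def event(grid):
    """A local event only ever consults neighbors that actually exist."""
    x, y = random.choice(list(grid))
    neighbors = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (x + dx, y + dy) in grid]
    if neighbors:
        grid[random.choice(neighbors)] += 1              # toy local rule

hot_plug(grid, [(4, y) for y in range(4)])               # expand to the right
for _ in range(1000):
    event(grid)

print(len(grid))  # 20 tiles: at any instant the node count is knowable and fixed
```

Since events only look at neighbors that are present, the same rule runs unchanged before and after the expansion, which is roughly what "finite but not fixed" seems to mean.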
the 'grid' changing bounds is the intended mode of existence
in regard to demon horde the paper(o) states 'channel size increases performance'
but this should be uncontroversial
more resources give better results
it will be an optimisation problem to determine necessary resource allocation for desired results
the shape of the simulation was stated(i) to be a simplified representation of the functionality
the paper discusses 'a movable membrane whose contents cannot diffuse out'; the figure shows an almost organic shape, much more adaptable than the rectangular simulation demonstration
(o) comjnl.oxfordjournals.org/content/56/12/1450.full.pdf+html
There's also a more recent 'router' demo(1) that -- though still rigid and rectangular -- does build its own boundary rather than relying on the grid boundary.
How well does this scale up? The example in the video seems toy-scale. I can see a few dimensions of potential scaling:
* larger numbers -- for example, many tiny processor/memory cells on a single chip (thousands to millions?)
* more dimensions, either symmetric (lattice) or asymmetric (hyper-pyramid that gets more sparse as you go up)
* more complex cells -- more memory, processor power, bandwidth
* specialization -- heterogeneous cells