New proof dramatically compresses space needed for computation

scientificamerican.com

190 points by baruchel 8 days ago


zerof1l - 5 days ago

Here's the gist:

For nearly 50 years, theorists believed that if solving a problem takes t steps, it should also need roughly t bits of memory: 100 steps, roughly 100 bits. (To be exact, the bound was about t/log(t).)

Ryan Williams found that any problem solvable in time t needs only about sqrt(t) bits of memory: a 100-step computation could be compressed and solved with something on the order of 10 bits.
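
To get a feel for the scale of the improvement, here is a rough back-of-the-envelope comparison (my own illustration; the paper's bound is O(sqrt(t log t)), and all constants and model details are ignored):

    import math

    # Rough comparison of the two simulation bounds for a time-t computation.
    # Constants are ignored; the paper's actual bound is O(sqrt(t * log t)).
    for t in [10**6, 10**9, 10**12]:
        old_bound = t / math.log2(t)             # classic ~t/log t bound
        new_bound = math.sqrt(t * math.log2(t))  # Williams 2025, ~sqrt(t log t)
        print(f"t = {t:.0e}: old ~{old_bound:.2e}, new ~{new_bound:.2e}")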

cyrillite - 5 days ago

One of my favourite sci comms YouTubers explained this in great detail https://youtu.be/8JuWdXrCmWg

Highly recommend

JohnKemeny - 5 days ago

Related:

For algorithms, a little memory outweighs a lot of time. 343 points, 139 comments, 39 days ago. https://news.ycombinator.com/item?id=44055347

You need much less memory than time. 126 points, 11 comments, 22 days ago. https://news.ycombinator.com/item?id=44212855

trueismywork - 5 days ago

https://arxiv.org/abs/2502.17779

Link to paper

mikewarot - 5 days ago

I get that this is an interesting theoretical proof. I'm more interested in the inverse, trading memory for time to make things faster, even if they take more space. It seems to me the halting problem is almost a proof that the inverse of this is impossible.

Memoization is likely the best you can do.
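
As a minimal sketch of that memory-for-time trade (a standard memoization example, nothing specific to the paper):

    from functools import lru_cache

    # Memoization: spend memory on a cache of results to avoid recomputation.
    # The naive recursion takes exponential time; with the cache it is linear
    # in n, at the cost of storing one result per distinct argument.
    @lru_cache(maxsize=None)
    def fib(n: int) -> int:
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(200))  # fast, but keeps ~200 cached results in memory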

gwbas1c - 4 days ago

One thing that's surprised me throughout my career is just how inefficient most of the code that I've inherited is. I suspect we could do a lot more with the hardware we have if we simply became better at programming.

(Perhaps more AI coaching could help?)

Gehinnn - 2 days ago

The impossible chess board problem must have something to do with the idea of solving tree eval with little memory (https://youtu.be/wTJI_WuZSwE?si=lgTc65RhXQesKchR)! When the chess board is random, it feels impossible to add any information to it. However, by choosing which cell to flip, and because the receiver knows the rule you followed, you can encode the position information in the random data without needing more cells.
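
For anyone who hasn't seen the puzzle, here is a minimal sketch of the usual XOR construction (my own illustration, not taken from the video):

    import random
    from functools import reduce
    from operator import xor

    # The board's "value" is the XOR of the indices of all heads-up squares.
    # Flipping the coin at index (value ^ target) changes that XOR to exactly
    # `target`, so a single flip encodes any of the 64 positions.
    def board_value(board):
        return reduce(xor, (i for i, up in enumerate(board) if up), 0)

    board = [random.choice([True, False]) for _ in range(64)]  # random coins
    target = random.randrange(64)                              # square to communicate

    flip = board_value(board) ^ target
    board[flip] = not board[flip]  # flip exactly one coin

    assert board_value(board) == target  # the receiver recovers the square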

krackers - 4 days ago

Related: https://news.ycombinator.com/item?id=44212855 and https://news.ycombinator.com/item?id=44055347

utopcell - 4 days ago

[ made this a top-level comment as I don't see it mentioned in other comments: ]

The result doesn't actually apply to _all_ algorithms, only oblivious ones. Oblivious algorithms are ones whose sequence of operations and memory accesses is fixed in advance and does not depend on the input data. Almost all useful algorithms are non-oblivious, with some notable exceptions [1]. This work is a step towards proving the result for the general case, with little practical use (and that's perfectly okay).

[1] https://en.wikipedia.org/wiki/Sorting_network
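
To make "oblivious" concrete, here's a small sorting-network sketch (my own illustration; the comparator positions are fixed regardless of the data):

    from itertools import product

    # A 4-input sorting network: the compare-and-swap positions are fixed in
    # advance and never depend on the data, which is what makes it oblivious.
    COMPARATORS = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

    def sort_network(values):
        values = list(values)
        for i, j in COMPARATORS:  # the same comparisons run on every input
            if values[i] > values[j]:
                values[i], values[j] = values[j], values[i]
        return values

    # By the 0-1 principle, checking all 0/1 inputs verifies the network.
    assert all(sort_network(bits) == sorted(bits) for bits in product([0, 1], repeat=4))
    print(sort_network([3, 1, 4, 2]))  # [1, 2, 3, 4]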

gweinberg - 4 days ago

I don't understand what they are saying at all. If I have a computation that requires going through a loop n times, why should the memory requirements increase with n at all?

kristianp - 5 days ago

This seems very theoretical, just an upper bound on the space required, without talking about what is being computed. Does it have any bearing on real algorithms?

baruchel - 8 days ago

Without paywall: https://www.removepaywall.com/search?url=https://www.scienti...

bluenose69 - 5 days ago

Here's a quote from the SciAm article: "Technically, that equation was t/log(t), but for the numbers involved log(t) is typically negligibly small."

Huh?

snickerbockers - 4 days ago

>(Technically, that equation was t/log(t), but for the numbers involved log(t) is typically negligibly small.)

My dude, that is NOT how rational numbers work.
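
Presumably the article means that log(t) is small relative to t, so dividing by it only shaves off a modest factor; concretely (my own numbers, using base-2 logs):

    import math

    # For t around a billion steps, log(t) is about 30, so t/log(t) is still
    # within a small factor of t -- far larger than the new sqrt(t*log t) bound.
    t = 10**9
    print(math.log2(t))                 # ~29.9
    print(t / math.log2(t))             # ~3.3e7
    print(math.sqrt(t * math.log2(t)))  # ~1.7e5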
