Problem solving often a matter of cooking up an appropriate Markov chain (2007)

math.uchicago.edu

246 points by Alifatisk a day ago


satvikpendem - 21 hours ago

I assume OP watched this excellent Veritasium video about Markov and his chains [0] and posted this article, which was referenced in that video.

[0] https://youtu.be/KZeIEiBrT_w

postit - 21 hours ago

Markov chains are a gateway drug to more advanced probabilistic graphical models, which are worth exploring. I still remember working through Koller & Friedman cover to cover as one of the best learning experiences I've ever had.

AnotherGoodName - 17 hours ago

I feel like we need a video on Dynamic Markov Chains. It's a method for building a Markov chain from data, and it's used by all of the top entries in the Hutter Prize (a competition to compress a fixed dataset as much as possible).
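(For a sense of what "building a Markov chain from data" means, here is a minimal Python sketch of an order-1 character model: count transitions, then normalize. This is only a toy illustration, not the actual DMC algorithm, which works bit by bit and grows its state machine adaptively by cloning states.)

    # Toy sketch, NOT real Dynamic Markov Compression: estimate an order-1
    # Markov model from a string and report next-character probabilities.
    from collections import Counter, defaultdict

    def build_chain(text):
        counts = defaultdict(Counter)          # counts[a][b] = times b followed a
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
        return {a: {b: n / sum(c.values()) for b, n in c.items()}
                for a, c in counts.items()}

    chain = build_chain("abracadabra")
    print(chain["a"])                          # {'b': 0.5, 'c': 0.25, 'd': 0.25}

A compressor turns predictions like these into savings by feeding them to an arithmetic coder: the more probability the model puts on the symbol that actually occurs, the fewer bits that symbol costs.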

mindcrime - 21 hours ago

Heh, somebody watched that same Veritasium video about Markov Chains and Monte Carlo methods. Great video; lots of interesting historical stuff in there that I didn't know (like the feud between Markov and Nekrasov).

amelius - 20 hours ago

If you're holding a hammer, every problem looks like a nail...

nyeah - 19 hours ago

I mean, often, sure. Also problem solving is often a matter of making a spreadsheet full of +, -, *, / operations. Problem solving is often a matter of counting permutations and combinations. It's often a matter of setting up the right system of ordinary differential equations. It's often a matter of setting up the right linear algebra problem.

It's often a matter of asking the right person what technique works. It's often a matter of making a measurement before getting lost in the math. It's often a matter of finding the right paper in the literature.

1vuio0pswjnm7 - 6 hours ago

Note to file: no one is complaining about the absence of [PDF] from the title. Possibly the time of submission is a factor.

For example, https://news.ycombinator.com/item?id=44574033

theturtletalks - 18 hours ago

The Veritasium video brought up an interesting point about how LLMs, if trained too heavily on their own output, will degenerate into Markov-chain-like behaviour and just repeat the same thing over and over.

Is this still possible with the latest models being trained on synthetic data? And if it is possible, what would that one repeated phrase be?
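(The repetition point presumably refers to the fact that a Markov chain forgets its starting state: iterate the transition matrix long enough and you end up at the stationary distribution no matter where you began. A minimal sketch, with an arbitrary made-up transition matrix:)

    # Repeatedly applying a transition matrix drives any starting
    # distribution to the chain's stationary distribution; the matrix
    # below is an arbitrary example, not taken from the video.
    import numpy as np

    P = np.array([[0.9, 0.1],    # row-stochastic: each row sums to 1
                  [0.5, 0.5]])
    d = np.array([0.0, 1.0])     # start entirely in state 1
    for _ in range(50):
        d = d @ P
    print(d)                     # ~[0.833, 0.167], regardless of the start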

Pamar - 20 hours ago

I am on the move so I cannot check the video (but I did skim the PDF). Any chance of seeing an example of this technique? Even a toy/trivial example would be great, TIA!
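(Not the article's own example, but a generic toy illustration of the MCMC idea: a Metropolis walker over the states 1..6 whose target distribution is proportional to the state's value. The trick is that only ratios of the target are ever needed, never the normalizing constant.)

    # Toy Metropolis sketch: sample from pi(k) = k/21 on states 1..6
    # using a +/-1 random-walk proposal; out-of-range proposals are
    # rejected, which just adds harmless self-loops.
    import random

    def metropolis(steps=200_000):
        counts = [0] * 7
        state = 1
        for _ in range(steps):
            proposal = state + random.choice([-1, 1])
            if 1 <= proposal <= 6 and random.random() < min(1, proposal / state):
                state = proposal
            counts[state] += 1
        return [c / steps for c in counts[1:]]

    print(metropolis())   # approaches [1/21, 2/21, ..., 6/21]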

naasking - 20 hours ago

Unless the problem is "quantum mechanics", which instead reduces to non-Markovian processes with unistochastic laws; ironically, this makes for a causally local QM:

https://arxiv.org/abs/2402.16935v1

stevenAthompson - 21 hours ago

Here's a direct link to a PDF.

http://math.uchicago.edu/~shmuel/Network-course-readings/Mar...

tantalor - 21 hours ago

(2007)