Ask HN: How to cheaply use a vector DB to detect anomalies in logs at 1TB / day

3 points by arconis987 2 years ago · 3 comments


I’m interested in playing with vector databases to detect interesting anomalies in a large volume of logs, like 1TB / day.

Is it reasonable to attempt to generate embeddings for every log event that hits the system? At 1TB/day, that's roughly 1B log events per day, or over 10k per second.

Would I just have to sample some tiny percentage of log events to generate embeddings for?

The volume feels too high, but I’m curious if others do this successfully. I want this to be reasonably cheap, like less than 1 cent per million log events.
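To make it concrete, here's roughly the shape of what I'm imagining (a rough sketch only, assuming a small local sentence-transformers model and a FAISS index; the model name, distance threshold, and index type are placeholders I haven't benchmarked):

    # Rough sketch: embed batches of log lines with a small local model and flag
    # lines whose nearest neighbour in a rolling index is unusually far away.
    # Model, threshold, and index type are placeholders, not recommendations.
    import faiss                                            # pip install faiss-cpu
    from sentence_transformers import SentenceTransformer   # pip install sentence-transformers

    model = SentenceTransformer("all-MiniLM-L6-v2")         # small local model, 384-dim vectors
    dim = model.get_sentence_embedding_dimension()
    index = faiss.IndexFlatL2(dim)                          # exact search; would swap for IVF/HNSW at scale

    DIST_THRESHOLD = 1.0   # made-up cutoff, would need tuning on real data

    def score_batch(log_lines):
        # Embed a batch and return the lines that look unlike anything seen so far.
        vecs = model.encode(log_lines, normalize_embeddings=True).astype("float32")
        anomalies = []
        if index.ntotal > 0:
            dists, _ = index.search(vecs, 1)                # squared L2 distance to nearest seen line
            anomalies = [line for line, d in zip(log_lines, dists[:, 0]) if d > DIST_THRESHOLD]
        index.add(vecs)                                     # grows forever here; a real version needs eviction/sampling
        return anomalies

    score_batch(["GET /healthz 200 3ms", "GET /healthz 200 2ms"])   # warm up the index
    print(score_batch(["GET /healthz 200 4ms",
                       "panic: runtime error: invalid memory address"]))

Even at this sketch level the throughput issue is obvious: at 10k+ events per second I'd have to batch aggressively and probably run several embedding workers, which is exactly where I worry the cost blows past 1 cent per million events.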

Twitter seems to be doing something like this for all tweets at much higher volume. But I don’t want to spend too much money :)

SushiHippie 2 years ago

Maybe have a look at what netdata does. It may not be 1:1 applicable to your use case, but I've used netdata to monitor my own servers, where it ingests thousands of data points per second, and the anomaly detection seems to work.

https://learn.netdata.cloud/docs/ml-and-troubleshooting/mach...
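I don't know the internals beyond that page, but the rough idea (train a small unsupervised model per metric on recent data, flag new points that land far from what it learned) looks something like this toy sketch. Definitely not netdata's actual code; scikit-learn and k-means here are just for illustration:

    # Toy per-metric anomaly detection on recent data (the general idea only,
    # not netdata's implementation).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    recent = rng.normal(loc=50.0, scale=5.0, size=(1000, 1))   # last N samples of one metric
    model = KMeans(n_clusters=2, n_init=10).fit(recent)

    def is_anomalous(value, threshold=20.0):
        # Flag a new sample if it's far from every cluster centre learned from recent data.
        dists = np.abs(model.cluster_centers_.ravel() - value)
        return bool(dists.min() > threshold)                   # threshold is arbitrary here

    print(is_anomalous(52.0))    # close to recent behaviour
    print(is_anomalous(500.0))   # nothing like the recent data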

gwnywg 2 years ago

Out of curiosity, are all logs coming through a single pipe, or is this an aggregate of multiple sources where you could apply something before aggregation?
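For example, something as simple as collapsing lines into templates at each source before shipping, so whatever does the embedding downstream only sees each distinct shape once per window. Rough sketch, the regexes are purely illustrative:

    # Rough idea: normalise lines to templates at the source so downstream
    # embedding only handles unique templates per window, not every raw line.
    import re
    from collections import Counter

    def to_template(line: str) -> str:
        line = re.sub(r"\b\d+\b", "<NUM>", line)                       # numbers -> placeholder
        line = re.sub(r"\b[0-9a-f]{8,}\b", "<HEX>", line, flags=re.I)  # ids/hashes -> placeholder
        return line

    window = Counter()
    for line in [
        "user 1001 logged in from 10.0.0.12",
        "user 1002 logged in from 10.0.0.13",
        "disk error on /dev/sda3: sector 8812911 unreadable",
    ]:
        window[to_template(line)] += 1

    # Only the distinct templates (plus counts) need to leave this box.
    for template, count in window.items():
        print(count, template)

If most of your 1B events/day collapse into a few thousand templates per window, the embedding cost question mostly goes away.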
