Energy consumption comparison in machine learning platforms

neuraldesigner.com

11 points by adrianomartins 3 years ago · 4 comments

sva_ 3 years ago

Alright, so the more I'm looking into this website, the weirder it becomes. Never mind even talking about the pricing model.

Clicking on their blog, the first entry is

> "How are variables in the dataset for machine learning?"[0]

That doesn't even seem like a valid English sentence to me.

Searching for sentences from the text leads you to various sources from which they were taken without credit.

In fact, you can find plenty of sites that are seemingly recycling the same sentences used in this blog. It's pretty bizarre.

[0] https://www.neuraldesigner.com/blog/type-uses-variables

sva_ 3 years ago

> TF [...] The final mean squared error is 0.0003.

> Neural Designer [...] reaches a mean squared error of 0.023.

> The following table summarizes the the[sic] most important metrics that the two machine learning platforms yielded .

[omits MSE]

They should train both to the same loss and then compare.
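A minimal sketch of that suggestion, assuming a TensorFlow/Keras setup (the data, architecture, and the 3e-4 target below are stand-ins, not taken from the article): stop training once the loss reaches a shared target, then compare time or energy at that point instead of mismatched final losses.

    import numpy as np
    import tensorflow as tf

    # Stop training once the loss falls below a shared target, so both
    # platforms are compared at the same final loss rather than after a
    # fixed number of epochs.
    class StopAtLoss(tf.keras.callbacks.Callback):
        def __init__(self, target_loss):
            super().__init__()
            self.target_loss = target_loss

        def on_epoch_end(self, epoch, logs=None):
            if logs and logs.get("loss", float("inf")) <= self.target_loss:
                self.model.stop_training = True

    # Stand-in data and architecture; the benchmark's actual dataset and
    # network are not given in the thread.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1000, 8)).astype("float32")
    y = (x @ rng.normal(size=(8, 1))).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(10, activation="tanh"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=10_000, callbacks=[StopAtLoss(3e-4)], verbose=0)
    print("final MSE:", model.evaluate(x, y, verbose=0))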

  • BobbyJo 3 years ago

    I'd like to know why there is a difference in the final loss at all. If the two networks had the same architecture, used the same loss function, and had random uniform initialization, then 1000 epochs should have them converging on very similar final loss values. Especially if one was able to converge to 3e-4.
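One way to check that expectation on a toy problem, assuming Keras and a fixed uniform initializer (everything below is illustrative; the article's actual setup isn't shown in the thread): run the same architecture, loss, and init scheme twice for 1000 epochs with different seeds and compare the final losses.

    import numpy as np
    import tensorflow as tf

    def final_mse(seed):
        # Same architecture, loss, and uniform init every run; only the
        # seed changes, so the final losses should come out very close.
        tf.keras.utils.set_random_seed(seed)
        init = tf.keras.initializers.RandomUniform(-0.05, 0.05, seed=seed)
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(8,)),
            tf.keras.layers.Dense(10, activation="tanh", kernel_initializer=init),
            tf.keras.layers.Dense(1, kernel_initializer=init),
        ])
        model.compile(optimizer="adam", loss="mse")
        rng = np.random.default_rng(0)  # identical synthetic data each run
        x = rng.normal(size=(1000, 8)).astype("float32")
        y = (x @ rng.normal(size=(8, 1))).astype("float32")
        model.fit(x, y, epochs=1000, verbose=0)
        return model.evaluate(x, y, verbose=0)

    print(final_mse(1), final_mse(2))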
