Energy consumption comparison in machine learning platforms (neuraldesigner.com)

Alright, so the more I look into this website, the weirder it gets. Never mind even talking about the pricing model.
Clicking through to their blog, the first entry is:
> "How are variables in the dataset for machine learning?"[0]
That doesn't even seem like a valid English sentence to me.
Searching for sentences from the text sends you to the various sources they were taken from, without credit.
In fact, you can find plenty of sites that are seemingly recycling the same sentences used in this blog. It's pretty bizarre.
> TF [...] The final mean squared error is 0.0003.
> Neural Designer [...] reaches a mean squared error of 0.023.
> The following table summarizes the the[sic] most important metrics that the two machine learning platforms yielded .
[the table omits the MSE]
They should train both to the same loss and then compare the training time and energy consumption.
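For concreteness, here's a minimal sketch of what "train to the same loss" could look like in Keras. The callback, the toy model, and the synthetic data are all mine for illustration; nothing here is from the article:

```python
import numpy as np
import tensorflow as tf

class StopAtLoss(tf.keras.callbacks.Callback):
    """Stop training once the epoch's training loss reaches a target value."""
    def __init__(self, target):
        super().__init__()
        self.target = target

    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get("loss", float("inf")) <= self.target:
            self.model.stop_training = True

# Synthetic regression data, standing in for whatever benchmark they used.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 10)).astype("float32")
y = (x @ rng.normal(size=(10, 1))).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Run until the shared target is hit, then measure time/energy up to that point.
model.fit(x, y, epochs=1000, verbose=0, callbacks=[StopAtLoss(3e-4)])
```

Do the same on both platforms with an identical target, and the wall-clock or energy numbers actually become comparable.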
I'd like to know why there is a difference in the final loss at all. If the two networks had the same architecture, used the same loss function, and started from random uniform initialization, then 1000 epochs should have them converging on very similar final loss values, especially if one of them was able to converge to 3e-4.
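That intuition is easy to check. A toy version of the experiment (architecture, data, and epoch count are my own choices, not theirs): run the same architecture, loss, and uniform initialization twice with different seeds and see how far apart the final losses land:

```python
import numpy as np
import tensorflow as tf

def final_mse(seed):
    """Train a fixed architecture from a seeded uniform init; return final loss."""
    tf.keras.utils.set_random_seed(seed)  # seeds Python, NumPy, and TF at once
    init = tf.keras.initializers.RandomUniform(-0.1, 0.1, seed=seed)
    x = np.linspace(-1, 1, 256, dtype="float32").reshape(-1, 1)
    y = np.sin(3 * x).astype("float32")
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="tanh", kernel_initializer=init),
        tf.keras.layers.Dense(1, kernel_initializer=init),
    ])
    model.compile(optimizer="adam", loss="mse")
    history = model.fit(x, y, epochs=1000, verbose=0)
    return history.history["loss"][-1]

# Different seeds, same setup: the two final losses should land in the same
# ballpark, nothing like the 0.0003 vs 0.023 gap in the article.
print(final_mse(0), final_mse(1))
```

If two runs of the same setup agree to within noise, a gap of two orders of magnitude between platforms points at a difference in architecture, optimizer, or stopping criterion, not at the platforms themselves.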