Ask HN: How to explain that ML approach is not 100% perfect?

5 points by masterchief1s1k 4 years ago · 4 comments


Hi, I am having trouble explaining the accuracy limitations of my ML models to my team members and employer.

People in software companies tend to think that since machines perform logical and mathematical operations perfectly, anything that runs on them will inherit this property, including AI.

Currently, no matter how hard I try to improve my model (generating more data, applying different augmentations, changing the model architecture, etc.), they will still try to find input data that proves my model is not "generalized" enough.

simondebbarma 4 years ago

This is a very good visual introduction to machine learning, created by the team at R2D3. It helps even non-technical people understand how ML works, and that training a model means balancing bias and variance error rates. Part 1[0] goes over what decision trees are, and Part 2[1] goes over the bias-variance tradeoff.

[0] https://r2d3.us/visual-intro-to-machine-learning-part-1/

[1] https://r2d3.us/visual-intro-to-machine-learning-part-2/
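To make the bias-variance point concrete outside of the linked visualizations: a toy sketch (not from the thread, invented for illustration) of why no model can be 100% right. A model that memorizes its training set, here a 1-nearest-neighbour classifier in pure Python, scores perfectly on data it has seen, yet still errs on new data, because real-world labels contain noise that no amount of training can predict away.

```python
import random

random.seed(0)

def noisy_label(x):
    # True rule: positive iff x > 0, but 20% of labels are flipped.
    # The flipped labels model irreducible real-world noise.
    label = x > 0
    return (not label) if random.random() < 0.2 else label

def make_data(n):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, noisy_label(x)) for x in xs]

train = make_data(200)
test = make_data(200)

def predict_1nn(x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    # On the training set itself this is pure memorization.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(data):
    return sum(predict_1nn(x) == y for x, y in data) / len(data)

print(f"train accuracy: {accuracy(train):.2f}")  # 1.00 -- perfect memorization
print(f"test accuracy:  {accuracy(test):.2f}")   # well below 1.00 -- noise is irreducible
```

The gap between the two numbers is the overfitting the R2D3 articles visualize: the training score measures memorization, and only held-out data measures generalization.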

  • masterchief1s1kOP 4 years ago

    Thanks for the link. It's very helpful; I was able to explain the more complex algorithms visually. I love the explanation of bias-variance and overfitting. I don't think I could get my point across without something like this to visualize my mathematical standpoint.

jstx1 4 years ago

What's there to explain? Tell people that it isn't perfect, and they seem to know this already.

Propose to share a proportional number of correctly predicted examples for every incorrect example they come up with? (Okay, don't actually do this)

Who is criticising your work? Manager/colleague/stakeholder/C-level executive? Does it matter if they are criticising it? What happens if you just shrug and keep doing what you're doing?

The whole thing seems like a communication/political problem, not a technical one, and it's hard to give advice when we don't know the specifics.

  • masterchief1s1kOP 4 years ago

    > Propose to share a proportional number of correctly predicted examples for every incorrect example they come up with?

    This is what I have been asked to do the whole time. And yes, like you said, it's not very effective. It would just make them think of each incorrect result as something I need to "improve".

    Other team members did shrug it off and simply added those incorrect examples as labels for the next iteration of the model, which did the job and received high praise. But in my opinion, that just seems like poor AI work ethic.

    > The whole thing seems like a communication/political problem, not a technical one, and it's hard to give advice when we don't know the specifics.

    Thanks. After thinking about it for a whole day, I guess the correct action is to improve my communication skills and present the pros and cons of my approach better. I just found that it is very hard to explain machine learning terms without proper visualization.
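The "just add the failing examples as labels and retrain" workflow described above can be sketched as a toy loop (all names and data here are invented for illustration, not from the thread). The sketch shows why the fix is seductive: the reported cases now pass by construction, yet nothing about generalization has been measured.

```python
def fit(train_pairs):
    # Toy "model": a lookup table over seen inputs, with a
    # majority-vote fallback for anything unseen.
    table = dict(train_pairs)
    majority = sum(y for _, y in train_pairs) >= len(train_pairs) / 2
    return lambda x: table.get(x, majority)

train = [(1, True), (2, True), (3, False)]
model = fit(train)

# Errors found by stakeholders probing the model.
reported_failures = [(4, True), (5, False)]

# The workflow from the thread: append each misclassified input
# to the training data and refit.
for x, y in reported_failures:
    if model(x) != y:
        train.append((x, y))
model = fit(train)

# Every reported case now passes -- by memorization, not by learning.
assert all(model(x) == y for x, y in reported_failures)
```

The asserted success is exactly what earns "high praise" in the thread, but only a held-out evaluation set that is never folded back into training can tell memorized complaints apart from genuine improvement.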
