Can we test it? Yes, we can [video]

youtube.com

78 points by zdw 5 days ago


cloogshicer - 2 days ago

I think what people really mean when they say "This can't be tested" is:

"The cost of writing these tests outweighs the benefit", which is often a valid argument, especially if you have to do major refactors that make the system harder to understand overall.

I do not agree with test zealots who argue that a more testable system is always also easier to understand; my experience has been the opposite.

Of course there are cases where this is still worth the trade-off, but it requires careful consideration.

j_w - 2 days ago

My takeaway from this is that when you have a system or feature that "can't be tested" that you should try to isolate the "untestable" portions to increase what you can test.

The "untestable" portions of a code base often gobble up perfectly testable functionality, growing the problem. Write interfaces for those portions so you can mock them.
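The "write interfaces for those portions so you can mock them" idea might look like this minimal Python sketch (the `PaymentGateway`/`Checkout` names are hypothetical, standing in for any dependency that does real I/O):

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """Interface for the 'untestable' portion (in reality, a network call)."""

    def charge(self, cents: int) -> bool: ...


class Checkout:
    """Testable business logic; the untestable dependency is injected."""

    def __init__(self, gateway: PaymentGateway) -> None:
        self.gateway = gateway

    def buy(self, cents: int) -> str:
        if cents <= 0:
            return "invalid amount"
        return "paid" if self.gateway.charge(cents) else "declined"


class FakeGateway:
    """Mock that satisfies the interface without touching the network."""

    def __init__(self, succeed: bool) -> None:
        self.succeed = succeed

    def charge(self, cents: int) -> bool:
        return self.succeed


# The logic around the dependency is now fully testable:
assert Checkout(FakeGateway(succeed=True)).buy(500) == "paid"
assert Checkout(FakeGateway(succeed=False)).buy(500) == "declined"
assert Checkout(FakeGateway(succeed=True)).buy(0) == "invalid amount"
```

The point of the sketch: `Checkout` no longer "gobbles up" the untestable call; only the thin `PaymentGateway` implementation remains untested.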

testthetest - 2 days ago

The ROI on unit tests, as well as the answer to "Can we test it?" is changing fast in the age of AI.

1. AI is making unit tests nearly free. It's a no-brainer to ask Copilot/Cursor/insert-your-tool-here to include tests with your code. The bonus is that it forces better habits like dependency injection just to make the AI's job possible. This craters the "cost" side of the equation for basic coverage.
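The dependency-injection habit mentioned above can be as small as passing in a clock instead of reading it internally; a minimal sketch (the `greeting` example is hypothetical):

```python
from datetime import datetime, timezone
from typing import Callable


# Hard to test: reads the real clock internally, so the result
# depends on when the test suite happens to run.
def greeting_untestable() -> str:
    hour = datetime.now(timezone.utc).hour
    return "good morning" if hour < 12 else "good afternoon"


# Easy to test: the clock is injected, with a sensible default,
# so a generated test can pin the time without monkeypatching.
def greeting(
    now: Callable[[], datetime] = lambda: datetime.now(timezone.utc),
) -> str:
    return "good morning" if now().hour < 12 else "good afternoon"


fixed_am = lambda: datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
fixed_pm = lambda: datetime(2024, 1, 1, 15, 0, tzinfo=timezone.utc)
assert greeting(fixed_am) == "good morning"
assert greeting(fixed_pm) == "good afternoon"
```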

2. At the same time, software is increasingly complex: a system of a frontend, backend, 3rd-party APIs, mobile clients, etc. A million passing unit tests and 100% test coverage mean nothing in a world where a tiny contract change breaks the whole app. In our experience the thing that gives us the most confidence is black-box, end-to-end testing that exercises things exactly as a real user would see them.
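The black-box idea can be sketched with the standard library alone: start a real server, then test it purely over HTTP, with no access to its internals (the `/health` endpoint here is a made-up example):

```python
import http.server
import threading
import urllib.request


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


# Port 0 asks the OS for any free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Black-box check: hit the running service over the wire,
# exactly as a real client would.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status, body = resp.status, resp.read()

server.shutdown()

assert status == 200
assert b'"ok"' in body
```

A contract change (say, renaming the `status` field) would fail this test even if every unit test around the handler still passed.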

devjab - 2 days ago

I don't think anyone ever regretted writing a test, but runtime assertions are so much better because they deal with issues when they happen, rather than trying to predict potential failures. This was probably forgotten as interpreted languages became more popular, but hopefully we're going to see a return to less "developer focused" ways of dealing with errors, so that your pacemaker doesn't stop because someone forgot to catch the exception they didn't think could happen with their 100% coverage.
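The runtime-assertion idea, as opposed to only predicting failures in a test suite, might look like this sketch (the pacing-rate function and its bounds are illustrative, not from any real device):

```python
def set_pacing_rate(bpm: int) -> int:
    # Runtime check: fail loudly at the moment the bad value appears,
    # instead of hoping a unit test predicted this path. In safety-critical
    # code this is a hard check, not a strippable `assert` statement.
    if not (30 <= bpm <= 220):
        raise ValueError(f"pacing rate out of range: {bpm}")
    return bpm


assert set_pacing_rate(72) == 72

caught = ""
try:
    set_pacing_rate(0)
except ValueError as e:
    caught = str(e)
assert "out of range" in caught
```

The check runs on every call in production, so it catches the failure the tests didn't think to write.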

webdevver - 2 days ago

i thought that's what customers were for?

borg16 - a day ago

shout out to "A Tribe Called Quest" for the title (my guess)

diggan - 2 days ago

Seems like blog spam, the actual content (presentation/talk) is at: https://www.youtube.com/watch?v=MqC3tudPH6w

aspenmayer - 2 days ago

When this happens, how do you determine who gets the karma? Is it right and just and logical for OP to get karma for submitting a URL that HN readers didn't visit after being updated by mods, or for OP to get karma previously for a URL that was deemed lacking with regards to the guidelines? It seems like they should get one or the other, but not both.

Just some food for thought. The reason I mention it, is that a person who has been commented upon by me previously for using scripts submitted this before OP, and if precedent holds, they should get the karma, not OP. But they have been commented upon by mods for having used scripts, but somehow haven't been banned for doing so, because dang has supposedly interacted with them/spoken with them, as if that could justify botting. But I digress.

To wit:

https://news.ycombinator.com/item?id=44449650

pinoy420 - 2 days ago

[dead]

somewhereoutth - 2 days ago

My feeling on testing:

- If it is used by a machine, then it can be tested by a machine.

- If it is used by a human, then it must be tested by a human.