IBM Watson Health slashes workforce (massdevice.com)
I'm an MD, and I dabble in programming and AI/DL. While I don't really care much about Watson in particular, the pervasiveness of the "enter healthcare = death" equation is worrisome to me. And it should worry you, too.
There's a lot of potential for good applications of this type of tech in healthcare, but the barriers to actually getting it off the ground strangle it in the crib.
I work for a medical software company which evaluated Watson for use in our diagnostic product.
It should be seen as a Good Thing that medical is a Serious Field where Non-Serious Ideas get run through the wringer and fail.
And that's what happened with Watson.
The word from our engineers was that Watson was designed to sell commercials, not be a useful tool in a problem space like differential diagnosis or drug reactions.
I think this is good. Financial has the same reputation. You can't just use agile to bust out a minimum viable medical or financial product. You will be chewed up and spit out. And rightfully so. Because when it comes to our health, or our money and our investments, we don't have tolerance for failure.
OK, my social network double-posted my message: annoying, but it's just a message. However, "my financial tool just double-posted a transaction" or "my prescriptions tool just doubled the dose of a medicine" -- these are not OK states at all.
I wish Watson lived up to the hype, but sadly, our experience was that it does not. So it failing out of Healthcare isn't a surprise to us, nor is it a bad thing.
> The word from our engineers was that Watson was designed to sell commercials, not be a useful tool in a problem space
This is the number one thing to remember about Watson. I've worked with teams within IBM that spent years and thousands upon thousands of man-hours trying to get it to be useful in a fairly well-defined, constrained use-case. Eventually they bailed and went to what was essentially a simple decision tree. It hits it out of the park on the sexy buzz-words, but I'm ready to curse the whole field for giving customers wildly inflated expectations.
I worked on a project that did a little Watson work as well. Same result as you described.
The shame is that decision-tree-based expert systems can be pretty powerful, but the tools these guys were using were not that great.
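To illustrate what I mean (the findings, rules, and recommendations below are entirely invented and far cruder than anything clinical), a decision-tree expert system is at heart just explicit, hand-coded branching over findings:

```python
# Toy decision-tree "expert system" sketch. All findings, rules, and
# recommendations are invented for illustration -- not medical advice.

def triage(findings):
    """Walk a hand-coded decision tree and return a suggested next step."""
    if findings.get("chest_pain"):
        if findings.get("shortness_of_breath"):
            return "urgent: cardiac workup"
        return "ECG and troponin panel"
    if findings.get("fever"):
        if findings.get("stiff_neck"):
            return "urgent: rule out meningitis"
        return "routine infection workup"
    return "no rule matched: clinician review"

print(triage({"fever": True, "stiff_neck": True}))
# -> urgent: rule out meningitis
```

The strength (and the limitation) is that every branch is explicit and auditable; good tooling for authoring and maintaining thousands of such branches is where most of the real engineering lives.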
I am a nerdy physician desperate to work with somebody on decision tree based expert systems. Are there a bunch of existing projects that I’m just missing?
Well, I just started an open source project to gather the data for such systems. It is starting as a database/graph of symptoms and conditions. But the vision is bigger than that.
https://github.com/gafmgafm/med
There are similar products but they are not open source, such as: http://apimedic.com/ and http://www.diseasesdatabase.com/
I've been doing a lot of work lately with automated Abductive Inference, and medical diagnosis has been cited as one of the key uses of same. Perhaps there could be some opportunity to collaborate (everything I do is open source as well, btw). If you guys would like to talk, feel free to hit me up at prhodes@fogbeam.com
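For anyone unfamiliar, here is the shape of the idea as a toy sketch (the knowledge base is invented, and real systems weight hypotheses by likelihood rather than brute-forcing subsets): given observed symptoms and a map of which symptoms each condition explains, abduction searches for a smallest set of conditions that covers the observations.

```python
from itertools import combinations

# Invented toy knowledge base: condition -> symptoms it can explain.
KB = {
    "flu": {"fever", "cough", "fatigue"},
    "anemia": {"fatigue", "pallor"},
    "strep": {"fever", "sore_throat"},
}

def abduce(observed):
    """Return a smallest set of conditions covering all observed symptoms."""
    conditions = list(KB)
    for k in range(1, len(conditions) + 1):
        for combo in combinations(conditions, k):
            covered = set().union(*(KB[c] for c in combo))
            if observed <= covered:
                return set(combo)
    return None  # no combination explains the observations

print(sorted(abduce({"fever", "cough", "pallor"})))
# -> ['anemia', 'flu']
```

A single condition can't explain all three symptoms here, so the search settles on the smallest pair that does.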
Theranos is a counterexample. It probably would have been outed as fraudulent sooner if it had tried to enter a more safety-critical space (ventilators, pumps, injectors, etc.), but it's still a bad sign for the medical diagnostic industry that it got so much funding and so far along, inking huge deals before collapsing. Perhaps some good will come of it if it makes others in the space more cautious in the short run.
Next industry to keep an eye on (many are already) is automotive. There are a lot of vendors trying to put a lot of software/connectivity into the next generation of cars. Lots of regulation but I will opt for a dumb car over new smart stuff any day.
Same with a lot of different traditionally embedded systems that are now being connected to the internet
No, Theranos started to fail as soon as anyone with real medical expertise without a stake in the company started to probe it.
Theranos was extremely successful at fleecing VCs and embarrassing political board members.
It's a good thing that it got so far, because it proves just how bad VCs are at evaluating successful companies and how much of the funding game is who you know.
Arguably, the big deals and optimism indicate that there is a lot of pent up demand for successful things, and people are willing to take some risk to get solutions. It’s not all doom and gloom, solutions just have to work.
Conversely, I read Theranos's progress as evidence of cynical bets that the government/military could likely be directed into pumping billions of USD into the company regardless of its merit.
You could certainly draw that conclusion from its original board. I did as well. Still, I think the point stands that people want healthcare to work, and will pay for it. It just so happens that it takes a lot of effort to improve the system since what we have is pretty good. (Not to say it doesn’t have problems)
Oh, I agree if it doesn't work it should be tossed aside. My concern is that given the potential usefulness demonstrated in so many other domains, I suspect that many of the tech failures in healthcare are due to the fact that it's too expensive to take the time to get it right.
I think this might also be supported by looking at some of the better funded / successfully exited health tech startups.
I’m thinking of things like Flatiron Health, BenevolentAI, Quartet and Spreemo, and surely many more.
At heart these companies seem to shy away from being “really about” medicine, and try to be a more watered down data company, looking at health records, social media, and other data to provide recommendation, service matching, comorbidity analysis, hospitalist tools, cost tools, assistance program allocation tools.
I’ve been very interested in ML applications in health, but a lot of the businesses that seem capable of getting funding seem like they are super light / elementary on the statistical modeling side, and the value add is just a claim to modernize crappy claims data or unify a bunch of previously disparate health data sources.
Maybe more advanced use of modeling will come later, though I am skeptical just given the way that once big enterprise customers dig hooks into essentially consulting services for health records, they won’t let go.
Another thing that creeps me out is when you see e.g. an ex Palantir board member joining health record data companies. It’s not hard to understand why a big insurance company, or at worst even government agencies, want decision tools on top of huge stores of health record data. Do we really benefit from some super new / unproven startup building up that data set & tooling?
I had posted this previously in a different thread, but it's all smoke and mirrors.
I work in the diagnostic imaging space, we are working on integrating AI for assisted diagnosis or screening. It's a several-year road map for some base function.
We had a top sales executive leave for the IBM Watson Health division about a year ago; he was very proud to be part of what Watson was going to bring to healthcare.
He's now back working at my company; he couldn't stand selling lies (and wasn't making much commission with canceled contracts).
For a big-name company, I'm more impressed with what Google DeepMind has done with the NHS.
Tangentially related: how does one get their foot in the door at a medical software company? Is there a big market for such professionals?
Learn MUMPS and move to Wisconsin.
Is MUMPS still a thing :-) Maybe I ought to dust off my RT-11 / FORTRAN skills.
The biggest EMR company is Epic, which uses MUMPS [0]. However, they are losing share to Cerner and others [1], so this formula will only work for a little while longer.
[0] https://news.ycombinator.com/item?id=13860937
[1] https://ehrintelligence.com/news/cerner-epic-mckesson-among-...
Epic still dominates the EHR space.
They have deployments slotted for 3 years.
If you have ever worked with Epic, they offer "managed" solutions: you either do things the Epic-designed way, or you don't go Epic... They have enough market share and business to dictate how things will work in a hospital environment.
You get a foot in the door at companies in the healthcare space that need to up their game, and you tell them what you can do for them. That's what I did.
I was recruited right out of college to work at Cerner. It's probably still fairly trivial to get a job there, but you have to live in Kansas City.
I think this has more to do with the fact that Watson sucks than anything else, although it may be a harbinger for other efforts to sprinkle magic machine-learning pixie dust on complex problems in the hope for a solution.
In this case, you are probably right. But it seems to happen over and over again suggesting that the barriers to innovation are higher than they should be.
Yeah, to be clear, Watson failing is good news. This is marketing people losing their minds because they didn't understand the products they were selling, and hospitals failing to do due diligence.
I've talked to literally dozens of people / companies / academics running at problems in healthcare using deep learning and large data sets, and the recurrent theme is cowboys who pretty consistently fail to follow basic practices of experimental rigor. ML is frankly an area where more rigorous and clear regulation is desperately needed, because the potential to cut corners / cheat is so easy and the challenges of vetting these systems are specialized enough that most organizations won't successfully be able to catch fairly basic experimental mistakes.
Many of those barriers have historical reasons for existing -- which do you think could be lowered without risking patient health (or yielding sufficient benefit to be worth some added risk)?
Probably. The following applies to the US: It seems to me that some of the biggest barriers involve getting access to enough data to make a good training set. Some of this is related to HIPAA and related privacy laws, which are in place for a good reason but are still a major barrier. The other big factor is the fragmentation of the data across different, non-interoperable, non-standard formats from different vendors who have intentionally made interoperability difficult. That part bothers me much more. Many patients have data spread across a dozen or more paper charts, lab systems, EMRs, pharmacy databases, etc. As with most data science tasks, the "data munging" is the hardest part.
I care about this mostly from the perspective of treating patients. Beyond training sets for AI, I need those records for the same reason: to make appropriate and informed treatment decisions.
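To make the "data munging" point concrete, here is a toy sketch of what it takes just to reconcile the same lab result from two vendors' exports. The field layouts here are invented, but the flavor -- zero-padded IDs, different date formats, codes versus names -- is representative:

```python
from datetime import datetime

# Two hypothetical vendor formats for the same HbA1c lab result.
vendor_a = {"pt_id": "000123", "test": "HbA1c", "val": "6.1",
            "date": "2018-05-24"}
vendor_b = {"PatientID": "123", "LOINC": "4548-4", "Result": 6.1,
            "Collected": "05/24/2018"}

LOINC_NAMES = {"4548-4": "HbA1c"}  # minimal code-to-name lookup for the example

def normalize_a(rec):
    return {"patient_id": rec["pt_id"].lstrip("0"),
            "test": rec["test"],
            "value": float(rec["val"]),
            "date": datetime.strptime(rec["date"], "%Y-%m-%d").date()}

def normalize_b(rec):
    return {"patient_id": rec["PatientID"],
            "test": LOINC_NAMES[rec["LOINC"]],
            "value": float(rec["Result"]),
            "date": datetime.strptime(rec["Collected"], "%m/%d/%Y").date()}

# Only after normalization do the two records even compare equal.
print(normalize_a(vendor_a) == normalize_b(vendor_b))
# -> True
```

Now multiply that by dozens of vendors, free-text fields, and paper charts, and the scale of the munging problem becomes clear.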
From the outside looking in, it's the kind of situation that begs companies to over-promise on results to get enough money to even give it a shot.
Unfortunately, from my experience the biggest barriers to sharing data are very deeply embedded incentives.
I was at a talk recently by an executive in charge of managing data for a massive health system. Think tens of millions of patients. He said they don't share data because if people learned their real quality metrics, it would give payers, the public, anyone with an interest that isn't aligned with the hospital system ammunition against them. I have heard this sentiment echoed by many. Open data is their enemy, because then patients could choose based on outcomes, not on market power or how nice the lobbies are.
The EMR vendors designed a walled garden into their systems' DNA. If data were shared freely, it would be easy to switch systems, and they would have to compete on cost and quality and couldn't charge hundreds of millions for installations and maintenance contracts.
Call me cynical but this is all based on conversations with people well placed in the industry
I am working on a medical dictation app, and the lack of any interoperability between thousands of EHRs is mind-boggling to me. There are absolutely no standards, and like you mentioned, it's completely deliberate. Working through these platforms over the years has made me less and less optimistic about having a better health care system in the future.
Well, FHIR is making its way, which looks positive, but no mainstream EHR has really adopted it yet.
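For anyone who hasn't looked at it, part of FHIR's appeal is that resources are plain JSON over REST. A trimmed (hypothetical) Patient resource, shaped like what a server would return from GET {base}/Patient/{id}, can be picked apart with nothing but the standard library:

```python
import json

# Trimmed, hypothetical FHIR R4 Patient resource.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1980-04-01"
}
"""

patient = json.loads(patient_json)
# In FHIR R4, HumanName has "given" as a list of strings and "family" as one string.
name = patient["name"][0]
full_name = " ".join(name["given"] + [name["family"]])
print(full_name, patient["birthDate"])
# -> Jane Doe 1980-04-01
```

Compare that with scraping the same demographics out of a proprietary HL7v2 feed or a vendor export, and the attraction is obvious.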
May I ask who your target users are? I sell a lot of Nuance PS360 and Fluency.
As a patient, I'd love to be able to archive/hold my own comprehensive copy of all of my health records.
Airplane security has a historical reason for existing too, but that doesn't mean the benefits justified the costs, either now or in the past.
The two massive barriers to entry are regulatory uncertainty and sales difficulties
Regulations: Up until recently, the regulatory landscape for medical software was extremely confusing. People were sitting on the sidelines waiting for FDA to figure out how to deal with machine learning, agile development, app stores, blah blah. Startups that wanted to jump in were staring at yearly 100K+ regulatory consultant bills. FDA is now actively de-regulating the bulk of medical software, and will create a regulatory kit which should better explain the regulations in an actionable way (instead of "create a design history file" without much explanation of wtf that means in practice). This should ease pains on that front
Sales: This is where the big pain is. Getting paid for shit in healthcare is just too damn hard unless your product is on the hospital operations side of things. Getting an FDA approval (assuming it's required) DOES NOT mean one of the thousands of insurers will decide to reimburse you for your product; many require multiple clinical trials for reimbursement purposes; and many will decide to stop covering your product for stupid reasons (ie some editorial in the "No One Reads This Journal of Nonsense" said it was unclear the product was effective). Good luck getting patients to pay for anything, as they expect all products to be covered by insurance. Good luck convincing doctors they should use your product, as they are already overwhelmed with data entry / shitty UIs and don't want to perform any additional clicks or workflow changes.
The FDA is not the primary source of regulation in medical software; it's CMS. And while CMS regulations theoretically do not apply to private insurance and private practice, Medicare and Medicaid represent such a large segment of the healthcare economy that private insurers tend to use similar expectations to the CMS regs.
Also, software which cannot meet CMS reg and cannot be used for Medicare/Medicaid patients is a non-starter for the majority of the healthcare industry.
The idea that our medical software space would be mostly deregulated is a rather horrifying thing in and of itself. I don't want my father being cared for using beta, buggy, software. I have high expectations for quality and risk management from healthcare software that I do not have for some agile beta video game or social network.
I think your "bulk deregulation" comment is rather wrong. If you are coding in this space and are not INTIMATELY familiar with the concept of PHI and HIPAA, then you will be writing software which violates core regulations in this space.
Your concerns on the sales side also do not mention "RISK" even once. You don't speak to our healthcare space even a little bit. Those doctors want risk management. What happens if/when the software screws up? Whose fault is it that the software is literally responsible for killing patients? You also don't mention what doctors care about: does it meet CMS regs? Does it meet state regs? Will I be able to submit my obscure XYZ state form? Will I be able to submit to BCBS, Aetna, Medicare? Will they accept? Etc etc. You will be writing custom software for customers all over, because the regulations that states and even cities like NYC might impose could be imposed on a single customer of yours only.
I think people who want to disrupt healthcare forget that healthcare IT is literally life or death. Your bug isn't just an annoyance; it could be the thing that ends someone's life. Regulations aren't just a barrier to agile development; they're also a life-saving tool that ensures risk management and data privacy are established in all products that get used on patients in our country.
> The FDA is not the primary source of regulation in medical software, it's CMS
It's totally dependent on what you are doing. Sure, HIPAA / PHI concerns are the primary issues for people who generally make software for the medical space, but by "medical software" I mean software that FDA might classify as "software as a medical device" AKA FDA regulated software. If your software falls under this designation, as most diagnostic medical AI systems like Watson would, then your primary regulatory concern is the FDA and you are held to much more stringent software development requirements than someone that has to deal with some PHI concerns.
In this context, my bulk de-regulation comment is 100% correct, as the FDA has basically dictated almost everything in the space to be unregulated (all back office products, general wellness products, enforcement discretion products, MDDS, even CDS which almost certainly should be regulated). They have literally gutted Class 1. You basically have to be writing something that is going to provide a diagnosis or treatment plan, control an existing medical device remotely, or mimic an existing regulated medical device in functionality in order to be regulated now. Reasonable people weren't scared to enter the space because they might have to deal with PHI for HIPAA concerns, as it is well known how to do that at this stage; they were scared to enter because there was (and continues to be) some uncertainty as to how novel product classes will be regulated by FDA.
> Your concerns on the sales side also do not mention "RISK" even once. You don't speak to our healthcare space even a little bit. Those doctors want risk management. What happens if/when the software screws up? Whose fault is it that the software is literally responsible for killing patients?
All FDA regulated software is mandated to have robust risk management planning prior to approval (ie ISO 14971 compliance). Other than that, the vast majority of products that a patient will be interfacing with can't do them real harm if they are buggy. That is precisely why the FDA has chosen not to regulate them.
> You also don't mention what doctors care about: does it meet CMS regs? Does it meet state regs? Will I be able to submit my obscure XYZ state form? Will I be able to submit to BCBS, Aetna, Medicare? Will they accept? Etc etc. You will be writing custom software for customers all over, because the regulations that states and even cities like NYC might impose could be imposed on a single customer of yours only.
I don't mention doctor wants because doctors aren't the buyers most of the time. Payers are the buyers, hospitals are the buyers, or patients are the buyers. How many pieces of software that you know of are sold directly to the doctor?
My company's software is considered a Class 2 medical device.
We have to go through so many crippling hurdles with the FDA that it's almost not worth it.
You are spot on.
To add, though: in at least the diagnostic imaging space, radiologist reading groups/practices are selecting their own systems more and more nowadays.
Individual doctors? Likely not.
So why do you need FDA regulation in your situation? I know people handling security regulation in the defense industry, and I'm curious how you know the FDA has to approve you, as opposed to CMS, and what that entails.
For diagnostic imaging (X-rays, CTs, etc.), which we make software for, a lot of equipment is FDA regulated.
We chose to go for Class 2 designation in FDA terms, as it shows you comply with the utmost regulations, which is a selling point. Some hospitals, depending on the state, get rebates for meeting compliance.
It depends on what space you are going into. If it touches a patient, it's more than likely FDA regulated in the USA.
Other countries have similar orgs ( Canada has... Health Canada).
We also work with DoD/VA... Don't get me started on compliance with them. (The security stuff is a plus, though; while initially challenging to meet, it has us pushing the code into commercial space products, and everyone benefits from the extra security hardening.)
What it entails depends on the product. FDA has 2 regulatory classes which require a premarket submission, class 2 and class 3. Class 2 submissions generally require you to comply with a set of software development standards governing testing and validation, requirements generation at multiple levels, interface design, risk management, traceability, design, and actual development. You are required to handle customer complaints, safety problems, and maintenance in specific ways. You are required to produce customer-facing documentation in specific ways, have installation plans, etc. Finally, you need to register yearly with the FDA and have all sorts of training and infrastructure planning. Proof of all of this is to be constantly generated (think every test report) and it all gets put into a massive file for FDA review. There are also some writeups you need to do explaining why you are class 2 as opposed to class 3. This generally involves finding a class 2 piece of software which resembles yours and saying your software is basically doing the same thing. This is typically referred to as a 510(k) submission.
There is also a pathway you can apply for if there is no existing device with a class 2 designation which resembles yours, but you believe your device is still fairly low risk. That is called a De Novo submission.
A class 3 submission generally involves even stricter requirements for development, plus a clinical trial to prove efficacy and safety. The most common of these is called a PMA, which stands for premarket approval.
Also, if you are FDA regulated you still have to comply with CMS regs. It is an extra layer of regulatory scrutiny.
Glad to hear things are improving on the regulations front. Sales is going to be hard for a while, but I think that there are some markets where this is less of an issue, as it's more cash-based anyway. It makes me sad, though, that a lot of the population-based preventative stuff for which AI would be a great fit is unlikely to be a financially appealing investment.
Sales: skip it. Cardinal, Leica, Stryker, and others already have sales workforces. Cut your deal with one of them instead.
IBM is a confusing one. For a while I was getting IBM powered blockchain ads coming through my Instagram feed, and I seem to remember stopping by their booth at Dreamforce a few years ago where Watson would help determine what candy they'd give you (or something like that). One thing I'd be interested to know is what they actually do vs. the marketing spin of what they do.
I also used to work for Watson. It remains one of the best jobs I've had. There's a number of useful services available through Watson on the IBM Cloud. I'm a little biased toward Watson Assistant which can be nicely customized and integrated with other solutions to create some terrific customized virtual agents. You can also easily train natural language systems using Knowledge Studio.
I helped build that candy machine demo! It was admittedly kind of silly, but it was fun to do.
Sadly IBM has changed a lot and I ended up in a different role not too long after, and then leaving the company entirely a year or so later.
To answer your question, though, one of the things Watson actually did was pretty powerful speech-to-text and text-to-speech engines. Out of the box they were approaching the best in the business, and with some customisation they could beat any other solution available for purchase. It was also far easier to use in a web browser than any of the competition (I'm biased here - I wrote a lot of that code - but I don't believe I'm wrong.)
I really enjoyed working with that part of Watson.
Very sad for long-time IBMers. But is it the beginning of the much-debated return of the AI Winter?
It seems Watson was founded on the false premise that lots of hand coded things could be called an “AI”. It’s the victim of its own winter, because it was never going to live up to its own hype.
I couldn't have put it better myself.
Participating in Jeopardy was quite an achievement, but to extrapolate that system and sell it as a "solve all" is to over-promise.
To be fair, it did more than participate. It dominated.
Yeah I kinda remember reading it being accused of being vaporware. Then I googled "Watson vaporware" and got the following [1]. Definitely think they are in that part of the hype cycle.
[1] https://www.technologyreview.com/s/607965/a-reality-check-fo...
edit: "trough of disillusionment" is what I was looking for
Interesting article, but I'd bet a big part of the reason physicians don't know is that so much is dependent upon information that lives "out of the system".
How much time will a patient spend on rehab, and will they actually perform the exercises effectively? What's their diet like, not just calories in but in terms of individual nutrients? How active is their lifestyle?
Recovery time seems like something that should be knowable by educated doctors, but the best poker player in the world can't tell you what card is coming up on the river.
Do you imagine that other, more successful "AI" services from large companies do not involve a lot of hand-coded things?
Yes. Well, at the very least, they don't involve only hand-coded things.
The ones that aren't hand-coded tend to act poorly, or racist, or worse.
Interesting. Do you have some references about these 'hand-coded things'?
"Hand-coded" may be an exaggeration, depending on your field.
ML systems need good training data (where "good" might mean "extremely voluminous") and careful tweaking of parameters. I think an ML expert would call a system where a substantial amount of time is spent here "hand-coded".
This isn't really anything to do with machine learning, and just IBM's inability to bring innovation to market (particularly in industry solutions.)
What would you expect from a company run like a hedge fund by a bunch of marketing hacks? Half the original Watson research fellows and executives have left to start their own companies, having seen the writing on the wall years ago.
My friend who survived the layoffs indicated that this was due in large part to changes in the political/regulatory landscape, which meant the companies they were working with would no longer require the features that IBM was providing.
What changed?
Watson could never really deliver on the AI healthcare integration and diagnostic support. Basically it became a status symbol for hospitals to have, but it was dead weight. https://www.statnews.com/2017/09/05/watson-ibm-cancer/
I was hired by an acquired Watson Health division a little over a year ago, and after seeing the very confusing story of how IBM expects to grow and strengthen these disparate companies, I'm not surprised. I'm also saddened that some of my former colleagues were among those laid off.
IBM seems expert at stealth layoffs. Possibly to avoid lawsuits?
https://www.propublica.org/article/federal-watchdog-launches...
Is there a full breakdown of scope anywhere? I know there have been major complaints about Watson Health before, but I still can't tell if this marks a retreat from the program, or just a harsher-than-usual instance of the sort of layoffs pretty common for acquisitions.
> “The message was that there are about 7,000 people in Watson Health today and this was a cost-cutting exercise. 90 days’ notice with 30 days’ severance.”
If accurate, that severance benefit is just shameful, and simply not good enough from a large corporation. Absolutely no excuse for that degree of unjustifiable, unmitigated greed to offer so little when possibly damaging lives, especially for older workers who may face IBM-like ageism elsewhere now too.
Hard fought career advice I’ve learned: always negotiate adequate severance.
It should be a heck of a lot more than 1 month, even if you’re a junior employee. Just turn down jobs if they won’t give substantial severance agreements. Unless you’re in dire straits & have to take the job. Otherwise, don’t do it. That employer will not consider the impact that some surprise layoff has on you. Get it in writing, up front in the negotiation, and don’t waste time with firms that won’t offer it. Not worth it.
I am flabbergasted by your reaction. This is the most IBM-like thing I have ever read. I'd be shocked if I wasn't reading this about IBM.
You're not going to get changes to your employment contract there from negotiating, that's magical thinking. And they will lay you off (sorry, RA you) without a second thought. The answer is "never work for IBM", and if you do, know this will eventually happen to you and plan around it.
Severance packages are probably not an easy thing to negotiate when you are trying to sign an employment contract.
If it’s a serious company, it’s just an expected, standard part of negotiating, no different than salary, bonus, benefits, vacation, etc.
In general, you should ask if you’ll be required to sign any NDA, non-compete, or non-disparagement agreements, either as part of a company handbook or as standalone documents. If so, the severance package should be commensurate with the duration and condition of those.
It is common to negotiate severance equal to your salary for the duration covered by the agreements, or at least a large fraction like 30-50% of the duration, so a competitive severance package would certainly be in the range of 4-6 months of your base salary.
You can also negotiate to have company-paid insurance benefits extend past your termination date as part of severance, to avoid needing COBRA in the US.
The severance agreement should also cover anything like a company issued laptop, etc., if you are promised you can keep it after employment ends.
To get this, they’ll likely require you to sign waivers to any additional monetary claims, and draconian IP, NDA, etc., agreements. And this is why you should force the issue of negotiating severance up front and why you should walk away from companies (like IBM) that won’t negotiate.
Otherwise, they’ll announce the restructuring or layoff and then hold you over a barrel, by claiming you have to honor their company policy non-compete anyway, and pressure you to sign restrictive documents at your HR exit meeting, generally offering some insultingly low severance benefit, like only a few weeks or months of pay and no continuation of benefits.
All of this advice is specifically for junior-level employees as well. Never let anyone treat you like you cannot negotiate severance just because you’re a junior employee. Walk away from those companies.
For experienced employees, the amount of severance should absolutely be at least 6 months of pay and continued company-paid health coverage, and in many cases you can negotiate for it to fully match the duration of the non-compete or NDAs, usually 12 months.
Above all, don’t accept any baloney nonsense about “standard policies” restricting severance to a small number of weeks of pay per each year of tenure, or any of that garbage.
That’s just the standard line they feed to people who don’t negotiate. And if a company refuses to be flexible on it, walk away.
I hear what you are saying, but feel like it also requires the caveat of "for in-demand roles where one has leverage in the negotiation."
At this point, a junior engineer may have some of that, but I'm not sure you can really apply that to all levels of experience across all functions. I agree with the sentiment and logic behind it, I just am not sure that is actually something most people could consider.
Not surprising. Many “tech” companies enter the field with arrogance, thinking of course they can do health better. They unfortunately often have no comprehension of the regulatory regimes for the markets they enter and therefore the projects are doomed to failure before they start.
No need for employees when Watson can do it all. Unbeknownst to these people, they renamed it Watson Health due to the fact Watson is going to run the entire division itself. Amazing! The future of AI is finally here! Watson has spent the last few years learning all these peoples' jobs.
This sort of sarcastic, content-free comment doesn't contribute much, and isn't a great fit for this community.
I disagree, I think it is satire showing the over-hyped nature of Watson and AI in general during this latest VC investment boom. HN still has room for humor
I disagree too. I think that humor should be allowed on hacker news. I mean as long as the jokes aren't at the expense of the ideology of the majority of hacker news's users. In that case, that humor should be downvoted and removed.
I agree that there's room for humor. But for that, it should be well-written satire that makes that point.
"[T]hey renamed it Watson Health due to the fact Watson is going to run the entire division itself" is a complete non-sequitur, just filling up space.
That's not what a non sequitur is.
Massive layoffs at a company that hypes its AI and is actually named after that same technology is called "irony", not non sequitur. I'm disappointed for having to explain this.
Doesn't this discussion seem pretty relevant to Tesla's Autopilot, too?