The Alan Turing Institute has failed to develop modern AI in the UK

rssdsaisection.substack.com

172 points by martingoodson 3 years ago · 155 comments

cs702 3 years ago

Quoting Rich Sutton, who wrote the perfect response some years ago:

"The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation."[a]

The very smart folks at the Alan Turing Institute are learning firsthand how bitter the lesson can be.

---

[a] http://incompleteideas.net/IncIdeas/BitterLesson.html

  • jmole 3 years ago

    I think the problem is actually worse than the article implies, because there are two things being leveraged here: compute and data.

    Short-term improvements made by domain-specific AI result in better outputs than more general AIs, ceteris paribus. But these better outputs can then later be fed back into more powerful general purpose AIs, and consuming the data*compute product from the domain-specific models is a very effective way to train domain-specific behavior.

    Today, we see this in reverse – people are training smaller models based on outputs from GPT-4. However, I expect that we'll start to see more and more training going the opposite direction in the future: Domain-specific generative models will be used to build scenarios for large general-purpose AIs to train against.

    Here's a concrete example – image diffusion models are really bad at physics, so you can't tell one to draw a person upside down, because it's not well-represented in the dataset, and if you force it to with something like controlnet you typically get a disfigured and horrific image. So obviously diffusion models are not the best long-term solution for image generation. But how do you get this concept of "upside down" into an AI model? Well, maybe you add some kind of neat segmentation technique that involves using several diffusion models and rotating and stitching together their outputs. Great, you made an upside-down generator.

    Now, you generate 100,000 images of "upside down" people, and the next advance in image generation AI can come along and learn that concept with ease thanks to the larger data set that it has.
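    To make that concrete, here's a minimal sketch of the synthetic-data step (my own toy illustration, not anyone's actual pipeline): the generator below is a made-up placeholder standing in for whatever domain-specific image model you have, and the "upside down" transform is just a crude rotation with Pillow rather than the stitching approach described above.

        from pathlib import Path
        from PIL import Image

        def generate_person_image(prompt: str) -> Image.Image:
            # Placeholder: in reality this would call your domain-specific
            # image-generation pipeline (e.g. the stitched diffusion setup above).
            raise NotImplementedError

        def build_upside_down_dataset(n: int, out_dir: str = "upside_down") -> None:
            out = Path(out_dir)
            out.mkdir(exist_ok=True)
            for i in range(n):
                img = generate_person_image("a full-body photo of a person standing")
                flipped = img.rotate(180)  # crude stand-in for "upside down"
                flipped.save(out / f"{i:06d}.png")
                # Pair each image with a caption so the next general-purpose model
                # can learn the concept from (image, text) examples.
                (out / f"{i:06d}.txt").write_text("a person upside down")

    The point is that the expensive, hacky domain-specific machinery only has to run once; its outputs then become ordinary training data for the next general model.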

    So it's not just that "more compute wins"; it's more that not only does more compute win, it wins even more because short-term improvements feed directly into the data pipeline that enables it to win.

  • adventured 3 years ago

    Along that line re Moore's Law, the biggest clear advantage the US has in regards to AI is Nvidia, AMD, and Intel (particularly Nvidia up to this point, although AMD and Intel are producing some potent GPUs).

    The reason the US was able to pull off a leap forward via OpenAI or LLaMA is that it has Nvidia as basically a national treasure. But it's an integrated whole: the US has all the components necessary and the ecosystem that produces them (including talent, thought process, money, start-ups, and the pay scale to lure talent).

    The Europeans have never lacked for the brains side of being able to do it, certainly. Until they fill out the rest of the ecosystem they won't be able to really compete in AI (they'll lag far behind, with lots of blaming and empty-promise big government projects). And China is its own worst enemy these days: all we need is for Xi to remain in power indefinitely and he'll throttle their potential as a global competitor.

    The US is close to locking up another round in the tech wars, riding on the same approach that has served it so well since WW2. Hopefully our do-something legislators are hands off long enough (ie don't snatch defeat from the jaws of victory).

    • reedciccio 3 years ago

      And data!! Don't forget that all the large accumulators of raw data available for commercially-supported research are in the US. A brutal combo of all 3: hardware, data and competence available in one country. China has two of the 3, and until recently had free access to hardware, too. Europeans lack data and hardware.

      • eigenket 3 years ago

        Deepmind was literally founded in the UK and is headquartered in London.

        Commercially supported research is (in my opinion) relatively OK in Europe, it's just that the big US tech companies are so big they'll just buy you if you do something sufficiently interesting.

    • cs702 3 years ago

      > including talent, thought process, money, start-ups, pay scale (to lure talent)

      ...and a culture of risk-taking and ambition to change the world.

      • adventured 3 years ago

        Yeah I tried to cover that with the start-ups implication. The US has that in spades in regards to aggressively funding start-ups by the zillions when a new tech inflection hits (whether software, Web, mobile, cloud and now AI). The VCs always go overboard, which is ideal (billions in destroyed capital is meaningless compared to producing the next Nvidia, Google, Amazon, Microsoft, etc).

    • worrycue 3 years ago

      But Nvidia’s GPUs are sold all over the world … how is that an advantage to the Americans?

  • luplex 3 years ago

    At least in the field of computer vision, there seems to be a lot of algorithmic progress too. The algorithms improve roughly every 9 months by an amount equivalent to a doubling of the compute budget.

    https://epochai.org/blog/revisiting-algorithmic-progress
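    As a rough back-of-envelope illustration of how that claim composes with hardware scaling (my own framing, not from the linked post):

        def effective_compute(hardware_flops: float, months_elapsed: float) -> float:
            # Assumes the ~9-month algorithmic doubling time estimated in the post above,
            # so algorithmic progress multiplies whatever the hardware budget provides.
            return hardware_flops * 2 ** (months_elapsed / 9.0)

        # Same hardware budget, 18 months of algorithmic progress -> ~4x effective compute:
        # effective_compute(1e21, 18) == 4e21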

    • famouswaffles 3 years ago

      The bitter lesson isn't really "algorithms bad", "don't try different approaches", "don't innovate" or "only work on models with massive compute".

      The heart of the bitter lesson is "don't try to codify "insight" into the process". It's basically the age old "you don't know what you don't know".

      The Transformer is kind of a perfect example. It boasts algorithmic improvements over RNNs, and LLMs are by far the best-performing take on language modelling ever. And yet the architecture itself contains basically no breakthrough in our understanding of language. It's an improvement over standard RNNs, but not because of any newfound insight about language itself.

      Basically trying to cram human high level instincts/insights into the process of solving a problem doesn't work better than giving a general architecture tons of data and letting it figure that all out by itself.

      • godelski 3 years ago

        > The heart of the bitter lesson is "don't try to codify "insight" into the process".

        This is exactly right and what a lot of people get wrong. Sutton isn't saying that you can't have constraints in your network either. He also isn't saying "no need to learn math", which is a far too common interpretation I've seen. It isn't just data and scale; algorithms are critical too. Just don't force aspects like Gabor filters, symmetry, etc. This doesn't mean works like geometric deep learning are dead (AlphaFold even uses it!). The reason not to force insights is that they sometimes don't hold in high dimensions and sometimes our assumptions are wrong. It can also limit the path to reach the optimal/desired solution even if the optimal solution has those constraints. But I am specifically saying "force" because we can hint, and we are always using some human insight.

      • FeepingCreature 3 years ago

        I'd argue it's even "you don't know what you do know." We cannot codify what we don't understand, and while we understand and can verbalize some parts of our thinking, others, maybe even the great majority, are hidden from us. We just get a feeling.

      • Retric 3 years ago

        LLMs do use human "insight" into language in how they require tokenized inputs and outputs.

        It’s one of those insights that seems obvious after the fact but really wasn’t.
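        For anyone who hasn't looked at what that insight amounts to in practice, here's a toy sketch of the merge step behind BPE-style tokenizers (a simplification for illustration, not any particular library's implementation): the vocabulary is learned from text by repeatedly merging the most frequent adjacent pair of symbols.

            from collections import Counter

            def learn_bpe_merges(words, num_merges):
                # Each word starts as a tuple of single characters.
                corpus = Counter(tuple(w) for w in words)
                merges = []
                for _ in range(num_merges):
                    pairs = Counter()
                    for symbols, freq in corpus.items():
                        for a, b in zip(symbols, symbols[1:]):
                            pairs[(a, b)] += freq
                    if not pairs:
                        break
                    best = max(pairs, key=pairs.get)
                    merges.append(best)
                    # Replace every occurrence of the best pair with one merged symbol.
                    new_corpus = Counter()
                    for symbols, freq in corpus.items():
                        out, i = [], 0
                        while i < len(symbols):
                            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                                out.append(symbols[i] + symbols[i + 1])
                                i += 2
                            else:
                                out.append(symbols[i])
                                i += 1
                        new_corpus[tuple(out)] += freq
                    corpus = new_corpus
                return merges

            # learn_bpe_merges(["lower", "lowest", "low"], 2) -> [("l", "o"), ("lo", "w")]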

        • famouswaffles 3 years ago

          That could count I suppose but I don't think that's really the kind of insight Sutton is alluding to in his original writing. Insight in this case would be more like shoehorning one of the processes humans would use to solve the problem. There are no innate grammar rules the architecture looks to before each attempt, no tree or word search. Things like that.

          Polishing the input in that way is neat, but it's not like you can't go character- or word-level with a transformer. The current way is just far more compute-efficient, but the Transformer will figure out the seq-to-seq mapping all the same.

          • Retric 3 years ago

            It doesn't just polish the input. Tokenizing the output also significantly reduces the risk of gibberish, especially if you do a grammar pass to ensure tense matches etc. It means a model with a much worse understanding of the language can perform better than something operating on raw characters.

            • famouswaffles 3 years ago

              Fair, I didn't mean to dismiss the impact of tokenization as such.

              But tokenization is still a process that's figured out by another DL model. Human "insight" doesn't produce the tokenization itself; another model trained on [insert language(s)] text figures out how best to break sentences into token parts.

              That said, these things are a spectrum. I don't think, "no tips from biology whatsoever" or "no constraints at all" is really what Sutton had in mind. The less of it the better is the general idea.

              • Retric 3 years ago

                Good point. I find it really reminiscent of how Alpha Zero ignored essentially all human knowledge about chess play, but still depended on insights into chess AI / search algorithms.

                I think of deep neural networks as replicating long term memory/reflex rather than thought. I don’t know if that’s quite it, but they excel at a lot of very difficult AI problems when paired with just a tiny bit of handholding. Some of that might go away with even more compute, but I think approaching AGI is going to take more than just even more compute.

      • nailer 3 years ago

        > Basically trying to cram human high level instincts/insights into the process of solving a problem doesn't work better than giving a general architecture tons of data and letting it figure that all out by itself.

        Hi, programmer from outside ML here. You might be able to answer something I've been wondering about.

        I do remember things like NLTK and logical inference many years ago. I understand the current tech is all large language models and (as you put it) the model figures out the rules.

        Sometimes I get responses from ChatGPT that seem like they wouldn't pass logical inference. I will think "all the foos aren't capable of X, bar is an instance of foo, stop suggesting bar to do X". Is there room for old-school logical inference as a kind of sanity-check layer on top of LLMs?
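        To make the question concrete, something like this toy post-filter is what I have in mind - the facts and the rule are invented placeholders for illustration, not a real knowledge base or any existing library's API:

            # Toy rule-based sanity check over LLM suggestions (hypothetical example).
            FACTS = {
                ("bar", "is_a", "foo"),
                ("foo", "cannot_do", "X"),
            }

            def violates_rules(entity: str, action: str) -> bool:
                # True if the knowledge base implies the entity can't perform the action.
                for subj, rel, cls in FACTS:
                    if subj == entity and rel == "is_a" and (cls, "cannot_do", action) in FACTS:
                        return True
                return False

            def filter_llm_suggestions(suggestions, action):
                # Drop suggestions the inference step rules out; keep the rest.
                return [s for s in suggestions if not violates_rules(s, action)]

            # filter_llm_suggestions(["bar", "baz"], "X") -> ["baz"]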

        • HPsquared 3 years ago

          I wonder if they'll end up with specialized subunits for different processing tasks, like the old "lizard brain" model with the neocortex on top of other layers:

          https://en.wikipedia.org/wiki/Triune_brain

        • famouswaffles 3 years ago

          Nothing wrong with that at all. Could be a viable solution for specific use-cases. But for now, most researchers will focus on innately improving those abilities. Right now that would mostly be by increasing scale (data or parameter size), highly curated data for the specific deficiency, or work on making transformers scale more efficiently. After all, GPT-4 is much better at logical reasoning than 3.5 and we still haven't hit a functional limit on scaling transformers.

      • shawntan 3 years ago

        But "don't try to codify 'insight' into the process" seems to suggest "don't try different approaches". I'm not sure how people can at once trot out the "Bitter Lesson" and interpret it as it is written, but still say "We're not saying not to think about new approaches".

        Is the idea then to work only on methods that allow for faster compute of more data?

        FWIW, the Transformer works faster on current methods of parallelisation, allowing for dramatic scaling that RNNs will find hard to compete with. But we do pay for that in terms of what can be computed (https://arxiv.org/pdf/2207.00729.pdf - TL;DR: Transformers are limited in the types of programs/functions they can compute because of parallelism).

        Scaling, ironically, does seem to be the 'direction of steepest descent' in terms of what will bring the best performance (for now). Gradient descent does find pleasant local optima that may keep us happy for a while.

        • famouswaffles 3 years ago

          As far as approach is concerned, all the bitter lesson advises against is trying to shoehorn human high level processes into the architecture. There's still plenty of room for different approaches outside of just faster compute.

          CNNs and Transformers are very different. Both can be used for computer vision. The bitter lesson wouldn't stop you from switching from one to the other.

          • shawntan 3 years ago

            The scope of "what to try" is large, so we (as a community) should prioritise things that we think would work. If the criterion is not only "faster compute", it would seem "things that mimic human high level processes" would be a good candidate.

            We started with MLPs then CNNs were invented, and that brought on pretty large gains. Arguably CNNs are architectures inspired by "human high level processes".

            Edit: I will say though, this is a new take on the nuance of "Bitter Lesson" that I've never heard, though even this interpretation I find to be strangely contradictory for the reasons above.

            • famouswaffles 3 years ago

              >it would seem "things that mimic human high level processes" would be a good candidate.

              That's the natural intuition yes. But I believe Sutton's point is that this very intuition seems to prove itself wrong in the long term.

              The way I see it, the problem with the high level is that we don't actually know shit. If we knew so completely what it took to model language or vision in the first place, we wouldn't need deep learning at all.

              It seems intuitive that trying to bake in some basic grammar rules might speed things along.

              The problem with that is that we often end up overfitting the models to those specific rules and constraints, limiting their ability to generalize and learn more complex underlying patterns and structures in language. Patterns that we don't actually know of.

              The low level processes result in the high level performance but not vice versa.

              It's said that a single human neuron is equivalent to a CNN. I wouldn't really call the operations of neurons high level though.

              • shawntan 3 years ago

                Right. So where I end up on this, given the examples of intuitions that DO work, is it's always the _right_ levels of prior knowledge that's needed. The intuitions on language (encoding basic grammar) didn't pan out, but the one for vision did (CNNs). What further levels of intuition could we use to improve even the large language models?

                That, of course, requires experimentation. If it's not speeding up scaling (of course this should be done), and it's not mimicking human cognition (Bitter Lesson says no), what do you decide to try? I guess I'm missing what other heuristics there are to use here.

                Just looking at the current state of where NLP is going: prompt engineering and its various 'step-by-step' siblings all look pretty motivated by high-level human cognition to me. Shouldn't that go against the bitter lesson as well?

                "The Bitter Lesson" feels like an article that was written at a time when the intuitions that went into deep learning had become commonplace, and scaling things up gets a lot of leverage out of the 'insights' that came before. Once the returns have diminished to a point of saturation, the 'insights' will likely once again be useful, until methods to scale catch up once again, and "The Bitter Lesson 2.0" will be making the rounds.

      • cs702 3 years ago

        Bingo!

        That is the bitter lesson.

        Thank you for posting this here!

    • version_five 3 years ago

      Also, while it gets lost in the foundation model stuff, a major trend in computer vision is toward smaller, high quality datasets. Arguably CV had its v1 LLM moment years ago with models trained on ImageNet, which produced amazing general results but weren't good enough for much specific stuff.

      If you look at what, e.g. Andrew Ng was talking about last year, there was a big emphasis on "small data" and getting good datasets.

  • diego_moita 3 years ago

    Funny plot twist: the pioneer of leveraging computation on neural networks is actually British: Geoffrey Hinton, living in Canada.

    Btw, Rich Sutton was born in the U.S. but renounced his American citizenship, becoming Canadian.

    • NalNezumi 3 years ago

      And an even funnier story is (according to a coworker of mine, whose PhD supervisor was a friend of Hinton's) that when Hinton was looking for a university to conduct his work in, he was rejected by the UK universities that were his first choices. So he ended up in Canada.

      So the plot twist comes with a bit of irony!

      • mafribe 3 years ago

        Yes, Hinton was on a temporary position at the University of Sussex (IIRC, the Centre for Cognitive Science) for a while, but was not offered a permanent academic position there when he applied.

    • fulafel 3 years ago

      Also, the history of the underlying advances is a lot more international than the current popular telling of the history lets on. See eg https://people.idsia.ch/~juergen/scientific-integrity-turing...

  • ragnese 3 years ago

    As someone with no knowledge of the fields of machine learning and artificial intelligence, I wonder how much of recent AI stuff is due to "true" Moore's Law (GPUs and such getting much faster/cheaper), and how much is due to the data version of Moore's Law (web-scale data farming/storage to "teach" LLMs and such).

    • cs702 3 years ago

      Making use of more data requires more compute (e.g., longer training, more powerful hardware, or both).

      • TremendousJudge 3 years ago

        At this point I'm not sure progress is due more to Moore's Law as stated originally (cheaper compute) than to companies just spending more on compute. The effect is the same for now, but with a clear limit.

  • dr_dshiv 3 years ago

    Let’s be honest, though—very, very few people expected large language models to be so ungodly effective.

    • readthenotes1 3 years ago

      Perhaps many many many more people would have made the bet on LLM versus NFTs or virtue signalling with “Data as an instrument of coloniality: A panel discussion on digital and data colonialism”, right?

    • clueless 3 years ago

      is this really true?

  • xtracto 3 years ago

    The EU+UK need a project like the Large Hadron Collider but for AI: develop a really, really large computational infrastructure that allows researchers to run AI experiments with technology that may be 20 or 30 years away from being commoditized.

    • dr_dshiv 3 years ago

      They are doing that for quantum computing. But 20 years is about right.

      It seems like the Netherlands completely missed the boat on LLMs, too… but I don't blame them. I just hope they pivot quickly.

owlbite 3 years ago

If it's anything like other UK government research bodies I've interacted with in the past, they started off with salaries significantly below market rate and then failed to even match inflation with pay raises due to UK government policies. Any fund injections are one-time capex for headlines, never new recurrent funding for keeping the lights on, paying the staff or funding the research. The fact that these places produce any decent research is more a testament to the dedication of the staff (who have often been there since before the several decades of penny-pinching that drove them into the ground) than to anything the government does.

  • roncesvalles 3 years ago

    That's the first thing that came to my mind as well. UK tech salaries are notoriously low.

    Let this be a harbinger for the rest of the industry -- you can "get away" with paying far below market for talent, but you make yourself vulnerable to getting generationally leapfrogged like this because the cutting edge work in your art is happening under someone else's roof.

version_five 3 years ago

Not focusing on LLMs isn't a major sin imo. If I were donating money to an institute I'd rather they were doing something unique than churning out another also-ran LLM. Researchers have diverse interests and expertise, and the field has way more depth than the current thing; it's not obviously bad that they were working on other stuff.

That said, if the examples he gives of what they were working on are representative, it implies they are spending their time chasing trends instead of doing fundamental research, and chased the wrong ones. I'd suspect they're doing more than what was implied, though.

  • martingoodsonOP 3 years ago

    Of course there are pockets of good work, like in any research institute. It's difficult to point to anything really pushing the envelope in AI research though. Do you have any evidence to the contrary?

matthew9219 3 years ago

Glassdoor indicates that the Alan Turing Institute pays senior research associates $49,015 per year, senior project managers $48k, and senior research fellows the same. I wasn't able to identify any roles there that paid above $50k/year.

Is it any surprise that when you hire people with PhDs and you pay them less than bartenders that you don't get the best people? Those people go to Google or Microsoft and make 10x as much.

  • coastermug 3 years ago

    Bartenders do not earn £50k in the UK. £50k is considered an above-average wage, although PhDs can earn far more working elsewhere.

    • matthew9219 3 years ago

      In Seattle, my girlfriend made $65k last year cutting hair for dogs. She has a college degree but no PhD, and doesn't use the college degree.

      I made $350k. I have a college degree in computer science, but no PhD.

      Why would any top talent take a job at 50k?

      • coastermug 3 years ago

        UK vs USA. You're not wrong at all though. The lack of salary transparency in the UK is also very interesting. If you don't poke your head around London, you can be blissfully unaware of the salaries that are available in finance and the broader tech scene in London. Given that many great UK universities exist outside of London, there will be PhDs who are unaware of, or don't care about, what they could be earning.

      • robotnikman 3 years ago

        Just curious, what tech position pays $350K? I make much less than that right now as a software engineer, though at a non FAANG company and in an area with low cost of living. If I could find a position that pays that much I'd happily jump ship.

        • matthew9219 3 years ago

          I'm a 10 year engineer at a FAANG. My promotion velocity is probably about average, maybe a bit above average, but not exceptional.

          • DaiPlusPlus 3 years ago

            Are you an IC, Lead, or in some kind of engineering-management role?

            I'm also in the Seattle area, with just over 10 years' full-time experience. I started my career at a FAANG, and everyone in my orbit (with similar education, roles and experience) is around 150-180; I don't personally know anyone who has admitted TC over $200k.

            I'm not skeptical of your claim, but just sharing my experience.

            (Yes, Levels.fyi makes me feel insecure about my career accomplishments, but at the same time I don't interview well...)

            • fatnoah 3 years ago

              I'm in an exec role at a 100-200 person east coast startup with significant YOE (both as IC and as an exec). My own comp, including bonus, is close to $300k. Most of our SWE roles are in the range of $165-200k salary & bonus, plus equity. It's been a very rare occurrence to have an offer turned down because of money.

              I came from larger companies (including a FAANG), and compensation there (salary, bonus, and RSUs) ranged from 1.5-3x my current compensation. Most of the people on my teams were in the $250k-300k range, with a few others at least doubling that. Some amount of that was driven by stock appreciation in a bull market.

              The money's definitely out there, especially at higher levels in public companies.

              • DaiPlusPlus 3 years ago

                > The money's definitely out there

                Right, I'm not denying that - it's just that Levels.fyi puts numbers that high at the 90th percentile of TC; the industry median is still significantly lower.

            • matthew9219 3 years ago

              I'm an IC. In Google terms, I was recently promoted to L6.

      • geodel 3 years ago

        > Why would any top talent take a job at 50k?

        If that's the only thing available, people will surely take it. It's like asking why I would buy a million-dollar condo in SF when I can buy a 100K house in Mississippi.

        Not everything is available everywhere.

      • Karawebnetwork 3 years ago

        Having taken a job at roughly that paycheck previously, I can tell you exactly why: my rent at the time in that area was $700 a month for a 4-bedroom apartment.

      • ChuckNorris89 3 years ago

        >In Seattle, my girlfriend made $65k last year cutting hair for dogs.

        Because the US pays much, much more for the same jobs than the EU or UK do.

      • rhuru 3 years ago

        Free healthcare ! :)

      • lowbloodsugar 3 years ago

        Because they are unwilling to emigrate to the USA.

        • questime 3 years ago

          Anecdotal, but I feel like Brits are really big on emigrating or at least working abroad. There are a ton in the US, Germany, UAE, Australia etc. And they tend to do very well outside the UK.

          • lowbloodsugar 3 years ago

            Indeed. Most of my friends no longer live in the UK. Even "average" friends from high school live abroad. I don't understand why people stay. What happens to a country when all those people leave? All those people with a mind that is capable of seeing a better future and acting on it? Who does that leave? Bunch of people who wave tiny flags and cheer a monarchy.

            • eigenket 3 years ago

              It's not exactly the same situation, but I'm from the UK and live in Poland (because I'm a scientist and Brexit happened). An enormous number of Polish people have left to work all over Europe and the world since the end of communism, and I think it's mostly been a massively good thing for Poland.

              Most of the Polish people I know who lived abroad and have moved back to Poland have brought skills, often highly educated children, relatively large amounts of money, and what I would describe as an international forward thinking attitude (i.e. they aren't voting for PiS).

              It's not inevitable that a lot of young people from the UK moving abroad is a bad thing for the UK, as eventually a bunch of them are going to come back.

              That said, all the flag-waving bullshit I see the UK doing does put me off returning.

        • troad 3 years ago

          They are, the US is just insanely strict with skilled migration.

    • tom_ 3 years ago

      $50,000 is more like £40,000.

      • lowbloodsugar 3 years ago

        Buying power and taxes mean $50k goes as far in the US as £50k goes in the UK. Possibly further.

        • eigenket 3 years ago

          It strongly depends on where you are - someone earning £50k in London is probably having a much better time than someone earning $50k in Silicon Valley, but if the $50k person lives somewhere less ridiculous they might be better off.

    • Dma54rhs 3 years ago

      £50k a year puts you in the top 10% of earners. I think Americans working in tech have no idea how dire the salary situation is in Europe in general.

      SF wages would literally put them in the infamous greedy 1%.

  • nonethewiser 3 years ago

    When it's not uncommon to start off at $35K after college, $50K is a pretty big increase. Not that there isn't a problem - it's just economy-wide in the UK. Salaries are pretty low compared to the US. We can argue about all the factors, but at the end of the day the skill will wind up in the US.

  • joe__f 3 years ago

    Bartenders in the UK are going to be earning less than half of that much. The comparison to wages in the USA is not direct because we have free healthcare and other social benefits.

    • gavin_gee 3 years ago

      You don't have "free" healthcare. You pay for it in increased income tax.

      • eigenket 3 years ago

        According to the OECD (https://stats.oecd.org/Index.aspx?DataSetCode=SHA#) the US spends about 18% of its GDP on healthcare while the UK spends about 12% of its (much lower per capita) GDP. Per capita the numbers are even more wild: the US spends about $12k per person per year while the UK spends about $4000.

        While the UK could (and in my opinion should) spend rather more money on healthcare than it does, the NHS is actually spectacularly efficient compared to the mess of private providers and insurance middlemen you have to deal with in the US.

      • joe__f 3 years ago

        I understand our healthcare system is funded. Feel free to read my post again and replace 'free' with 'socialised' if it helps.

  • extasia 3 years ago

    I would add that some people working at the Turing Institute are also tenured professors, so this is only one of their salaries. Not sure this applies to that exact position, but a person I know is a Turing fellow and a professor at a university simultaneously.

    • sieste 3 years ago

      "Turing fellows" don't make any income from that role. They get a bit of travel money and access to the institute.

waffletower 3 years ago

"It’s concerning that none of the projects mentioned in this document, or indeed any other major open source AI project, arose in the UK." I believe this statement is at least partially incorrect. StabilityAI is a London, UK based company, and several of the researchers directly involved in the development of Stable Diffusion are UK citizens. https://www.crunchbase.com/organization/stability-ai

  • martingoodsonOP 3 years ago

    That's a good point. Thank you. Officially the open source model was released by the Ludwig Maximilian University of Munich’s CompVis lab. I agree that's something of a technicality.

  • _Wintermute 3 years ago

    And AlphaFold was developed across the road from the Turing Institute.

  • whywhywhywhy 3 years ago

    As pitiful as the Turing Institute's contributions are per this article, the fact that it doesn't mention that throws the entire article into question.

diego_moita 3 years ago

And the rest of the world goes, "meh..."

This is part of a trend: the birthplace of the Industrial Revolution lost most of its manufacturing industry, the nation that once championed free trade is cloistering itself behind Brexit, the inventors of football (a.k.a. soccer) are falling behind at the World Cup, the nations that once were vassals of the British Empire are drifting away from the Commonwealth, and Scotland ...

Well, it kinda looks like Great Britain is becoming less Great with each passing decade.

In World War I, four empires collapsed: czarist Russia, the Ottoman Empire, Austria-Hungary, and the British Empire.

The Austrians already accepted it. The Russians and Turks are refusing desperately to accept it. The British didn't even notice it.

  • andyjohnson0 3 years ago

    > Well, it kinda looks like Great Britain is becoming less Great each passing decade.

    I'm a brit and unfortunately I think you're broadly right. I wish my children were growing up in a more confident society that looks outward and forward, not inwards and to the past.

    The current government wants the UK to be a "powerhouse" (or whatever the latest slogan is) for AI, EVs, biotech, etc. But they're stuck in a view of the world that is somewhere between nostalgic nationalism and outright nativism, and pumping out constant culture-war paranoia about "immigrants" and betrayal.

    So we get Brexit - an economic catastrophe - and their desperate need to be seen to be competing rather than collaborating on science research. Non-UK students avoid our universities because they can't get visas or because they're actively harassed by the government once they're here. Scientists leave the UK for countries where they can get grants and decent equipment and actually do collaborative research.

    Meanwhile we build useless aircraft carriers that we can't afford to operate, and stage bread-and-circuses pageants in London - while the food banks have never been busier, and huge numbers of people grind away in poverty. But the important thing is that we're good at tradition and we have history.

    The country's industrial and scientific base is decaying, as is its infrastructure outside of the London bubble, and none of the grifters and fantasists running the country have a clue. I'll be advising my kids to emigrate.

    • abawany 3 years ago

      I'm not a Brit but I see things a bit differently - the culture is innovative ('clever') and favors restlessness, which holds hope. Consider the Raspberry Pi and 3D printing, both relatively recent British innovations that changed the world considerably. The current government and their troubles aside, I have a feeling the British boffins will continue making a difference in the world and remain relevant on the world stage for a while yet. Also, all this talk of British weakness, but no one would dare attempt a Falklands for quite a while without getting a bloody nose for it. Finally, while Brexit was more than a flesh wound, I feel confident the country will work its way through it eventually, though it will likely require a different party in power before that happens. Edit: minor updates to post.

      • andyjohnson0 3 years ago

        > no one would dare attempt a falklands for quite a while without getting a bloody nose for it.

        I'm pretty sceptical that the UK could mount an out-of-area expeditionary war on its own anymore. In 1982 the British army had about 160,000 troops - now it's down to 78,000. We don't have the air- or sea-lift capability, and the merchant ships that were requisitioned back then now sail under other flags. We have a couple of carriers (built at vast expense) but we can't afford the planes to fully populate them, and we don't have enough ships and submarines to form proper carrier groups. At least one of them is flying US planes with USAF pilots, and last I read was escorted by US Navy ships.

        The Falklands wasn't the pushover it's often made out to be. And more recently we got our arses kicked in Helmand.

        > Consider the raspberry pi

        Yeah, but that was 2012 and what has there been since? And does that compensate for ARM Holdings (originally a UK company) de-listing in London, for example?

        • ThisIsNotNew 3 years ago

          ARM delisted in 2016 because they were bought by SoftBank and became a private company. The fact that they (and the others listing in the US) are not going to the EU tells you a lot: they view it lower than London.

          Helmand/Afghanistan was an insurgency war. IEDs, bombs under the road, shoot and scoot. The Taliban would run rather than take on NATO troops head-on. The US also failed there; that doesn't mean the US cannot handle a conventional war because of Afghanistan.

          • andyjohnson0 3 years ago

            Re ARM: you're right about the delisting date. I was thinking of their decision to list in NYC rather than London when they go public again this year. I suspect this says something about their lack of confidence in the UK, but that's not the only factor I'm sure. And I confused the point by referring to "delisting".

            Re Afghanistan: the fact that the US can still handle a conventional war, post defeat, doesn't mean that the UK could.

  • ben_w 3 years ago

    There was a recent quote from Anand Menon that sums up my feelings:

    "Much excitement in the family whatsapp group this morning at the prospect of an Indian and a Pakistani bringing about the partition of Britain" - https://twitter.com/anandMenon1/status/1640612440051163137

    An Indian I work with also found this quite amusing.

  • justrealist 3 years ago

    That's a pretty simplistic view re: Turkey.

    Turkey did become irrelevant after WWI, but is far more of a power today than it was 30 years ago, and given the relative dysfunction of the neighborhood, it's only going to get stronger.

  • bee_rider 3 years ago

    The fact that they wouldn’t have the same industrial advantages as, say, the long dead British Empire, has to have been baked into the creation of this institute in 2015.

    I think it is just… the idea of doing something “in a country” is just generally an unnecessary and vexing constraint. “American” companies are leading the pack here because America is a nice regulatory environment for a Global company to plop down a headquarters.

    Also it is research stuff, there’s a random aspect to it. We’ve just got more dice to roll by virtue of having more researchers in the US.

  • meroes 3 years ago

    The UK from the 1880s to now shows a very large decline in world power. It might be the largest decline of any nation while remaining intact. In that respect it's actually doing well.

    • ben_w 3 years ago

      There's at least an Ireland-shaped gap in "while remaining intact".

      Possibly also Malta, given the brief window of opportunity where they might have ended up with seats in Westminster, but I'm much less confident about that.

  • mym1990 3 years ago

    A decade(s) of poor leadership in an era of fast moving technology will do that!

    • throw_pm23 3 years ago

      That may be the case, but a simple reversion to the mean may also be sufficient explanation. If everyone plays in a globalised (more or less) even playing field, then sooner or later they catch up and the initial lead will matter less.

      • mym1990 3 years ago

        It's likely both; no one stays on top of the world forever, and globalization did well to level some playing fields!

  • password54321 3 years ago

    What a load of over generalised nonsense.

ninth_ant 3 years ago

For the benefit of innovation it actually pays off when some organizations work on alternatives to the dominant model.

What if LLMs become stagnant and some other approach is needed, perhaps in tandem with it? Work that seems irrelevant today may become useful then.

I’m not saying that their approach is this, just that failure to produce a LLM doesn’t necessarily equal an embarrassing failure.

  • martingoodsonOP 3 years ago

    The failure to do any meaningful work related to the most important breakthrough in AI ever is objectively bad.

    • version_five 3 years ago

      That doesn't follow from anything. Research is part methodical slog, part lottery, maybe a pinch of intelligence. A few labs won the short term lottery here, and most researchers explored stuff that didn't get headlines. (And to be fair, OpenAI built a great product that catapulted lab research into popular view).

      There might be some argument on other metrics - publications, students trained, lectures, recognition, whatever - that shows this institute is lagging. But not being part of LLMs implies nothing about their success or failure.

    • mnd999 3 years ago

      LLMs are not the most important breakthrough in AI ever, in the same way that NFTs are not the most important breakthrough in digital commerce ever. It's just a load of hype to generate big funding rounds. At least there are no cartoon apes this time around.

      • sva_ 3 years ago

        I agree that LLMs are not the most important AI breakthrough ever, but your characterization seems needlessly harsh, as LLMs undeniably have utility.

      • extasia 3 years ago

        The transformer architecture is arguably the most important breakthrough in NLP, and language is the predominant mode of communication between humans, so I fail to see how it's "just a load of hype".

        Could you name a bigger breakthrough in AI?

        • sva_ 3 years ago

          I'd say the multi-layer perceptron itself.

          Maybe even convolutional neural networks, because they showed that ANNs are viable and are what really got the ball rolling.

          • mafribe 3 years ago

            CNNs are from the 1980s (the "neocognitron" by Kunihiko Fukushima [1]), while the MLP is from 1958 [2]. So they are nearly half a century and nearly a century old, respectively.

            [1] K. Fukushima, Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position.

            [2] F. Rosenblatt, The Perceptron: A Probabilistic Model For Information Storage And Organization in the Brain.

  • nonethewiser 3 years ago

    And what alternative to LLMs does the Turing Institute have?

    > I’m not saying that their approach is this, just that failure to produce a LLM doesn’t necessarily equal an embarrassing failure.

    That's not really a rebuttal of the article though. They specifically state that they aren't blaming the institute for not producing an LLM, nor even for not predicting them.

bbor 3 years ago

So this article could be summarized as "man who has been running a DL company for 12 years suggested that a government institution fund DL more, and is mad that they aren't"? Disregarding the dash of woke==bad. Certainly a valid opinion, but these comments seem to be treating it as much more objective than I take it to be.

Substantively, this just seems ludicrously short-sighted. If all investment was focused on the most recent AI model to have success, DL wouldn't exist. I'm definitely saving this article for the day in the near future when people realize, hey, maybe the entire field of AI & cognitive science wasn't just wasting time for the last 70 years, and maybe those ideas will also be carried on the rising tide of Moore's Law.

  • khazhoux 3 years ago

    Is this the woke==bad part?

    > Their top piece of content was “Data as an instrument of coloniality: A panel discussion on digital and data colonialism”. Do any AI specialists think this work is going to push the bleeding edge of AI research in the UK?

    Because I think that speaks directly to his point and has nothing to do with woke wars.

    • version_five 3 years ago

      I'd say that's a pretty clear example of why woke==bad. The title reads like something that was made up by the Onion in an attempt to mock a caricature. This kind of garbage draws attention away from actual research.

      • khazhoux 3 years ago

        That’s not a fair assessment. Just because this study is not useful in the technical advancement of the field, does not mean that studying social issues is worthless (“woke is bad”). Just that that research should happen somewhere else.

        • version_five 3 years ago

          It's my perception of this social justice stuff that it infects everything, because it's somehow become a magic wand that gives people power over others. The whole point in this example is that resources from what is ostensibly a computer science research institute are going to fringe left political issues. It wouldn't be OK if they studied "intelligent design" or whatever the current far right is up to either, but politics has managed to infiltrate research, so we're all forced to deal with culture-war stuff instead of working on computer science.

          • khazhoux 3 years ago

            I look at it like this: it’s exceedingly hard to hire strong technical people. It’s extremely hard to create cutting edge research. It’s as true here as in any research institution, especially (sorry to say it) when it’s not a top tier research institution. And so in those cases, yeah, you will find a lot of not-great research.

            And someone else mentioned, here they are paying one or two orders of magnitude less than Google and other top companies for AI/ML researchers. Should we be surprised that they don’t have technical heavy hitters on their staff, or leadership?

      • bbor 3 years ago

        Well

        A) It's a 90 minute seminar, and literally all the attendees are specialized in social science fields. Seminars aren't exactly breaking the bank I'm guessing.

        B) The reason that government doesn't fund stuff like "intelligent design research" is that it's not a scientific topic in the slightest. I'm guessing the words in the title are setting off Culture War alarms for you, but do you really think these issues aren't important to the health of our society? Even more so with the advent of intuitive AI?

        Maps and surveys of "new worlds", passport photos and vaccination cards to control the movement of "impure" bodies, accounting spreadsheets used in plantations of enslaved peoples... all of these technologies suggest that data has always been an instrument of colonialism. But can the history of European and American colonialism also help us interpret contemporary phenomena like algorithmic racial violence, quarantine apps and vaccination apartheid, the injustices of the gig economy, and disinformation campaigns that threaten our democratic futures?

        https://www.turing.ac.uk/events/data-instrument-coloniality

TaylorAlexander 3 years ago

Odd that the article author lists “Data as an instrument of coloniality: A panel discussion on digital and data colonialism” as an example of them doing nothing - digital colonialism is an important subject and one that many larger institutions seem afraid to cover.

  • martopix 3 years ago

    Indeed, and it's a topic that would not be covered by big tech. I think this idea that academia must compete on the same terrain as big AI companies is misguided. There are areas where only academia can be successful, because they would not be profitable, and others where academia doesn't have the resources.

    • midiguy 3 years ago

      Yup. I think it will be pretty important for academia to serve as an intermediary between the possibilities being presented by big tech on the one hand, and the concerned regulatory bodies and lawmakers on the other. It's the only place you will find individuals who actually understand stuff like LLMs without necessarily having a vested interest in any one firm.

TheCaptain4815 3 years ago

Didn't the UK basically have that with DeepMind until Google bought them out? I'd suspect DeepMind would be a fantastic competitor to OpenAI today had they not sold.

  • extasia 3 years ago

    Deepmind are still based in London. In fact they're pretty near to the Turing Institute HQ.

908B64B197 3 years ago

Am I the only one who finds it odd how the British government brags about Alan Turing after what they did to him? Having a government research center named after him seems particularly strange after what they had him endure.

The state forced him to undergo chemical castration because of his homosexuality. Same state kept his achievements and contribution to the war effort a secret up until after his death, so they could persecute a war hero without the public knowing about it.

Crazy to think he was convicted in 1952. Same year Elizabeth became Queen and head of state. She could have simply overturned his conviction. The man saved women, men, and children of all races and orientations from a horrible end. Had he not cracked the Enigma's cryptography, there would most likely remain nothing today of the Crown that persecuted him. Blown to dust by the Luftwaffe.

If only the British government had extended the same humanity to Turing himself.

nologic01 3 years ago

Imagine how people might feel about the "departed AI train" in any lesser European (or other) country that would not even remotely dream of having an Alan Turing Institute.

It is such self-inflicted misery. The bright future for the UK would have been as a core member of the EU, helping shape a large economic space with massive amounts of talent, happily moving around the wonderful cities, tapping the endless cultural heritage and building a digital society congruent with the European way of life and values (which in various important ways differ from the US). While the nationalistic reflex is not as strong elsewhere in Europe, it is still a hindrance that shows up in countless frictions.

In any case, LLMs are just another stop on the journey. If people stop digging while in a hole, there is always a way out.

  • TMWNN 3 years ago

    What utter and complete nonsense. What evidence is there that a UK in the EU would be in any way, shape or form better off in terms of AI? Zero, zip, zilch, nada, nothing.

    Or anything else, for that matter? The current economic woes of Britain are no greater or worse than those of its western EU peers, contrary to the repeated prattle of the UK having committed "economic suicide". Or, rather, if the UK has committed suicide, so has the rest of western Europe.

    • nologic01 3 years ago

      Well if you hate what the EU stands for I don't think there is any "evidence" that would sway you in any direction.

      That people in Europe have concrete and non-trivial views about important matters is fairly evident if you simply check what kind of laws and regulations they are passing. So there is a distinct vision.

      That there is plenty of research talent that is educated to a cutting-edge level with citizen tax money is also pretty evident if you check who has been behind some of these important AI "innovations". So there is also the capability to execute on the vision.

      The weakness of the EU is that it can't (not fast enough anyway) build the structures (internal markets etc) that will overcome nativist instincts. To accumulate the resources and critical mass where it is needed. Brexit made everything just a little bit harder.

      The UK leaving the EU means, for example, that it's cut off from academic research networks. People are by now even studying this effect quantitatively [1].

      Brexit also means that the entire financial system in the EU (which was very London centric) is in shambles. London as a financial center is declining [2] with implications for the shape of Europe's capital markets, the developments around new forms of digital finance, fintech etc.

      So, yeah, go on with your livid denial of how disastrous that fateful decision was.

      [1] https://pubs.aip.org/aip/cha/article/30/6/063145/1030255/Ana...

      [2] https://www.bbc.com/news/business-63623502

  • matthew9219 3 years ago

    Your premises are deeply mixed into your argument here.

    From a (conservative-leaning) American perspective - the Alan Turing Institute is a perfect example of broken European thinking. Real innovation happens at highly motivated private companies: private companies who have huge profits to make if they are successful, and who pay huge salaries to talented employees, talented employees who desire to be rich. Government institutions more frequently pay mediocre salaries, reward politicking rather than excellence, and are undermotivated to solve problems, because they are unlikely to receive a commensurate share of the benefits.

    Now that Britain has left the EU, we Americans hope they will stop pissing money down the drain on useless government programs like the Alan Turing Institute and will instead reorient to a more business-friendly environment, which recognizes that innovation happens mostly in successful businesses, and that government's role is to create that business-friendly environment.

testemailfordg2 3 years ago

1) Need of society
2) Necessity for survival
3) Individual determination
4) State priorities

I think most of the inventions in human history can be traced back to a scenario where more than one of the reasons above is true.

So going by this, even if we had a determined individual at the Alan Turing Institute, if none of the remaining three conditions were true, his chances of success would be slim in comparison to his counterparts at other places like Canada or the US.

Look at China and you can easily find an environment where a determined individual under state priorities gets funding and support for critical technologies.

Probably now the state priorities will change, and in future you can expect some new breakthroughs happening in the institute as well.

currymj 3 years ago

The criticism of the Alan Turing Institute's past failure to become a world-leading AI research institution may be fair; I don't know if they really missed an opportunity or not.

However, after this it's a somewhat confusing post if you click the links, because after criticizing the government for not being willing to focus on LLMs, he links to a press release about 100 million pounds being devoted to training LLMs.

It's unclear to me what the objection is to this project, or what is meant by "open source" that is different from a normal publicly funded government research project.

  • martingoodsonOP 3 years ago

    Sorry for not being clear enough. The point is that this £100M is very likely to be spent on the Turing Institute since it seems to suck up all AI funding in the UK. It will therefore likely be wasted.

coastermug 3 years ago

It looks like they recognise this failure: "In 2023 that purpose remains unchanged, but we are reassessing our founding assumptions and setting a course for the next five years".

Quite what that course is, remains to be seen.

My personal experience is that the UK loves giving out funding to established/safe organisations that it knows won’t cause a (positive or negative) splash.

I’ve seen very few UK government initiatives funding genuinely small organisations, but of those that I’ve seen, they have been well executed.

arethuza 3 years ago

Not to be confused with the Glasgow-based Turing Institute which did do some very cool stuff back in the 1980s/1990s:

https://en.wikipedia.org/wiki/Turing_Institute

Founded by Donald Michie who worked at Bletchley Park!

https://en.wikipedia.org/wiki/Donald_Michie

  • DonHopkins 3 years ago

    I worked at the Turing Institute in Glasgow in the summer of 1992, and just recently ran into an old colleague who was one of the original founders, who confirmed a funny story I'd heard:

    Donald Michie once overheard his secretary explaining to someone over the phone how to pronounce his name, in a thick Scottish accent:

    "It's Donald, as in Duck, and Michie, as in Mouse."

    He was so pissed off, he refused to speak to her for a month! ;)

    The Turing Institute put on the First Robot Olympics in September 1990 (check out the cool retro robot photos!):

    https://en.wikipedia.org/wiki/First_Robot_Olympics

    My favorite photo is the walking pizza box fascinating the kids:

    https://en.wikipedia.org/wiki/First_Robot_Olympics#/media/Fi...

    • arethuza 3 years ago

      One of my favourites was the name of the C to PostScript compiler PDB - being Glasgow: "Pure Dead Brilliant".

      Edit: I never did use it, I'd got quite happy writing PostScript!

martopix 3 years ago

I feel this isn't quite a failure of the Alan Turing Institute or of the UK, but rather of university research vs big tech -- the latter has a significant advantage in this specific field, for a number of reasons. There was an interesting paper about that, and related discussion here: https://news.ycombinator.com/item?id=35564013

feintruled 3 years ago

This is a bit like the CIA totally failing to spot the imminent collapse of the Soviet Union.

karaterobot 3 years ago

But they are kilometers ahead of everybody else on predicting what prices NFTs sell for, and understanding the role of coloniality in data. It's a tradeoff, really.

29athrowaway 3 years ago

And the Human Brain Project is another large failure.

nonethewiser 3 years ago

Europe is great at making laws but not much else.

anotheronebytes 3 years ago

Because their real job is to preserve knowledge, not to innovate or invent new stuff.

But if academia is not the institution in charge of innovating, then which one is?

I'm still a bit confused about this. I see it as a contradiction but don't know what to make of it.

PhD degree programs say "come innovate" inside the very institutions whose real job is to preserve knowledge?

nurettin 3 years ago

Why should they copy everyone?

computing 3 years ago

So did DeepMind, excuse me, Google DeepMind.

realjohng 3 years ago

Oh boy, jarring coming out of that dark background website

m4rtink 3 years ago

I guess this is karma at play, given what Britain did to Alan Turing ?

everyone 3 years ago

Does anyone else think the Turing test is a bit silly and outdated now? Like ChatGPT could probably pass it yet it is clearly not intelligent.

  • bbor 3 years ago

    Turing explicitly designed it to account for this EXACT concern. From the "Turing test" Wikipedia article:

    > ...the question has become "Can machines do what we (as thinking entities) can do?" In other words, Turing is no longer asking whether a machine can "think"; he is asking whether a machine can act indistinguishably from the way a thinker acts. This question avoids the difficult philosophical problem of pre-defining the verb "to think" and focuses instead on the performance capacities that being able to think makes possible, and how a causal system can generate them.

    The whole point of the test is that we'll NEVER build a machine that we think is "intelligent" or "thinking", because those words mean about as much as "love" or "meaning" in a scientific or engineering context.

    Maybe the fact that we've built computers that pass the test means that we're close to AGI? Worth reexamining some priors :)

  • skybrian 3 years ago

    What people loosely call “passing the Turing Test” isn’t much like Turing’s imitation game, which is much harder, assuming the humans are playing to win.

    More: https://skybrian.substack.com/p/done-right-a-turing-test-is-...

  • dmreedy 3 years ago

    In addition to bbor's comment, the idea you're scraping up against is the much-discussed P-Zombie[0]

    https://en.wikipedia.org/wiki/Philosophical_zombie

  • version_five 3 years ago

    Yes I said something similar recently: https://news.ycombinator.com/item?id=35861837

  • voakbasda 3 years ago

    Most humans can pass it and are clearly not intelligent.

    • bbor 3 years ago

      A question only someone who frets way too much about AI would ask: are you serious? Or is this just an "other people are dumber than I am" joke? Would be very interested to hear your stance, if the former :)

      • voakbasda 3 years ago

        Partly serious, from the perspective that the goalposts for "clearly" acknowledging AI have been raised every time the previous goalpost was achieved. Presently, we have LLMs that converse better than most humans, but most people still say they are not artificial intelligence.

        In that regard, I think it fair to say that the criteria for acknowledging AI now effectively exceed the criteria for being a human. In situations like the Turing Test, I'd wager most humans would already come across as inferior in conversation compared to our best LLMs.

        If the LLM is not intelligent yet it still appears more intelligent than us, then it seems mathematically that our intelligence must be zero (or less).

        • bbor 3 years ago

          Very cogent, thanks for the response! I definitely agree. My simple solution to this question is to go back to Turing himself, who designed his test with the explicit goal of avoiding terms like "intelligence" in favor of more meaningful ones, like "similarity" or "capability".

          Anyone who says that GPT(4) isn't intelligent when it's turning out long, well-reasoned essays on demand seems a little stubborn, IMHO...

yieldcrv 3 years ago

The sadder thing is that someone was once enamored by this institution enough to care.

Maybe this is good for the UK taxpayers and parliament to know. I don't think the rest of us - or anyone - should spend any more energy on this. Most publicly funded institutions engage in an affinity grift. News at 11. Stop funding it, the end.
