Under-Investigated Fields
I think it's weird to suggest that programming languages are under-investigated, and the discussion he gives of it makes me question his conclusions about other fields. There are vastly different paradigms completely orthogonal to the hierarchy in that image the author uses. I don't think getting people to stop using assembly/C/fortran/cobol/php/whatever is a research problem.
His list regarding economics also raised my suspicion: anti-corruption, union organization, charter cities,... I couldn't really believe that no one was seriously studying these questions, so here are some results from literally 5 minutes of googling:
https://www.tandfonline.com/doi/abs/10.1080/0969229100360783... https://www.jstor.org/stable/2522141 https://ideas.repec.org/p/ess/wpaper/id2471.html
and there are even dedicated Master's degrees for some of those questions: http://www.lse.ac.uk/study-at-lse/Graduate/Degree-programmes... http://www.lse.ac.uk/study-at-lse/Graduate/Degree-programmes...
Conclusion: if you're not an expert on the matter yourself, you should be very careful when claiming that something is under-investigated. Maybe it's just harder to come up with usable solutions than you think, and that's why you haven't heard of any. At the very least you should spend 5 minutes to google it.
PS: be generally cautious of anyone who writes or talks as if he were an expert on no less than 10 entirely different disciplines.
> makes me question his conclusions about other fields
I think this list makes no sense unless it is made into a wiki of some sort with community contributions from hundreds of people.
Under "Physics", he has a sub-heading "Increasing Iteration Speed of Experimental Physics". This section mentions one random startup. Yet CERN has hundreds of people actively working on this topic for decades.
They invented, built and implemented the world's first capacitive touch screen control system in the period 1972-1976 specifically to answer this need. That's just one example off the top of my head.
> I think this list makes no sense unless it is made into a wiki of some sort with community contributions from hundreds of people.
You'd just end up with a lot of noise. The person who wrote this list even has a few scientific publications, which is more background than most people who would contribute to the wiki have.
Perhaps we could create a Foundation that pays a group of scientists a modest salary to spend all their time curating the list. The public can send the Foundation any comments they want.
Of course, the Foundation can't pay enough people to do all the work itself. So they could crowd-source some of the work to people from various fields who know enough about the field to judge what's really under-investigated vs. things that are already thoroughly investigated. Those groups -- panels, let's call them -- could get together and make recommendations to the Foundation.
The Foundation employees could form panels, get their recommendations, and then synthesize those into a list.
And if we're going to all this work to create a good curated list, then maybe the government or donors or whoever could even fund some of the ideas that come out of the final curated list. Not a ton of money -- just enough to hire one or two people to work on the idea for a few years. Maybe call the final curated list a Portfolio or something.
IDK what we could call such a Foundation. I guess if the government funds it we could call it the National Science Foundation or something like that. And if it's private, probably focused on more near-term stuff, maybe "VC firm".
Sounds like a really good idea.
> in the period 1972-1976
Did you mean "for decades", or "decades ago"? I'm not sure a notable result from half a century ago qualifies anything for a list about modern research (without regard to the quality of that list).
I'm trying to say they didn't stop doing it. They've done it for decades.
Agreed, and Scratch is hardly the pinnacle of programming language theory.
I was talking to professor Shriram Krishnamurthi a week ago about a block-based, educational ML dialect I was making, and he told me that he believed that once a language had a type system as sophisticated as mine, its target audience should be using text, not blocks. So, perhaps Scratch-style languages are a dead-end, or infeasible beyond a certain level.
My personal hope for the "next level" of programming languages is those that use typed holes to interactively help the programmer construct the program.
Why do you think a block based language would be good for learning ML?
My impression is that blocks would be good for learning concepts without accidentally overfitting on syntax, but for writing real programs, blocks are hopelessly inefficient for input and editing. Then again, letting the user input text but automatically converting it to blocks by continual parsing might be the best of both worlds. Sort of like emacs lisp parens mode.
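That "best of both worlds" idea -- the user types text, and the editor continually re-parses it into a structured block tree it can render -- can be sketched with a tiny s-expression parser. This is a minimal illustration, not any particular block editor's implementation; the grammar and names are invented:

```python
def parse_blocks(text):
    """Parse s-expression-style text into a nested block structure.

    A hypothetical block editor could re-run this on every keystroke and
    render the resulting tree as draggable blocks instead of raw text.
    """
    # Pad parens with spaces so a plain split() tokenizes the input.
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            block, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = read(pos)
                block.append(child)
            return block, pos + 1  # skip the closing ")"
        return tokens[pos], pos + 1

    tree, _ = read(0)
    return tree

# The text form "(define (square x) (* x x))" becomes a nested block tree:
print(parse_blocks("(define (square x) (* x x))"))
# → ['define', ['square', 'x'], ['*', 'x', 'x']]
```

The point is that the text stays the source of truth, and the block view is just a continually refreshed projection of it.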
I had the same reaction.
There are easily thousands of people working on building higher-level programming languages/models. A very short and very incomplete list of entire subcommunities of PL working on languages that are higher-level than java/python:
1. The ML family -- OCaml, SML, Scala, F#. I think it's very fair to say that these are "higher-level" than imperative OO languages. And you can definitely get a job writing OCaml or Scala or F#...
2. A whole bunch of programming languages/primitives/paradigms aimed at making concurrency/parallelism/distributed systems easier. Erlang, X10, session types, Manticore, etc. Rust might even belong here.
3. Programming languages that incorporate resource/complexity analysis.
4. Literally decades of work on visual programming languages (which have mostly resulted in modern IDEs and teaching tools like Scratch).
5. behavioral types
6. linear types (again rust kind of fits here)
7. dependent types
8. I would also argue that systems like tensorflow and pytorch are really a sort of programming language -- they have a very different model of computation than the host language. Just because they don't have a parser/compiler/etc. doesn't mean they aren't a programming language, imo.
9. Tons of other stuff that doesn't fit in the major categories above (e.g. netkat).
10. I mean even SQL belongs in this list.
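On point 8: the sense in which pytorch/tensorflow are "really a programming language" is that expressions build a computation graph with its own evaluation model, distinct from the host language's eager evaluation. Here is a minimal sketch of that kind of embedding -- all class and function names are invented for illustration, not taken from either library:

```python
class Node:
    """An expression in a tiny embedded 'language': building one records
    the computation instead of performing it (like a TF1-style graph)."""
    def __init__(self, op, *args):
        self.op, self.args = op, args

    # Operator overloading is what makes the DSL feel like the host language.
    def __add__(self, other):
        return Node("add", self, other)

    def __mul__(self, other):
        return Node("mul", self, other)

def const(value):
    return Node("const", value)

def evaluate(node):
    """A separate interpreter gives the graph its own model of computation;
    it could just as well compile, differentiate, or distribute the graph."""
    if node.op == "const":
        return node.args[0]
    left, right = (evaluate(arg) for arg in node.args)
    return left + right if node.op == "add" else left * right

# Host-language syntax, but nothing is computed until evaluate() runs:
expr = const(2) * const(3) + const(4)
print(evaluate(expr))  # → 10
```

No parser or compiler in sight, yet the embedded graph clearly has its own semantics -- which is exactly the argument in item 8.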
Even for the languages/models listed above that don't have large adoption, the ideas are often incorporated into more mainstream languages in one way or another. So there are significant projects developed in each of these types of languages (with the possible exception of behavioral types and session types).
Higher-level programming Languages is one of the most explored areas of Computer Science -- if anything, it's an over-explored field.
This is less a list of "underexplored ideas" and more a list of "over-hyped ideas with over-crowded communities". Every item on the CS list is the sort of thing that an ungrounded undergrad research intern would want to work on.
Some of the descriptions in other fields have a similarly dilettante vibe to them. E.g.,
* Bio: math bio is a huge community and all those folks are well trained in chaotic dynamics. You can say it's under-explored, but there are probably hundreds of people working on this right now, and at least thousands have in the past few decades.
* Math: there's a section on subspace packing with a side-story about a proof assistant and the author doesn't even mention Hales...
* Physics: building machines to automate experiments is definitely the sort of thing people get paid to do whenever there's a large enough market (and even sometimes when there isn't). Similarly, nuclear-powered propulsion is under-explored... as long as you don't count the militaries of the major nuclear powers, that is.
> While it’s easy to point to areas in computer science that might be over-researched (after all, Machine Learning conferences often get more papers than they can effectively review), there are still areas that are neglected with respect to their potential benefit.
It would be worth considering what drives people towards researching particular areas, even if it might seem kind of obvious. I.e. it might be tempting to say it's visions of fame, loot & prizes, but I think to most people it is obvious on some level that they personally won't get any prominent position in these fields. I think it's partly a question of discoverability (say I'm a student: how do I learn about these topics, and that there are practical ways to work on them?), partly perceived prestige.
Also, getting into a particular PhD or similar currently means thinking years in advance -- I'm not talking about learning/studying here, but connections, bureaucracy and applications, having paper "proofs" that you know something, etc. Say you notice some interesting area towards the end of your studies. It's too late to move even a modest distance from what you're doing (e.g. from cognitive science to computational neuroscience) without wasting additional precious years. And that just to enter the not particularly rosy world of academia.
As for myself (not in academia nowadays), I find myself searching for a middle ground between overcrowded fields (where I will probably do relative "grunt work" at best) and fields that are so obscure as to not be viable. The fear of having no steady income is too real.
Life-long learning -- worth mentioning the work by well-known CMU researchers:
Tom Mitchell, Never-ending learning (2015): https://www.cs.cmu.edu/~tom/pubs/NELL_aaai15.pdf
Sebastian Thrun, Life-long learning (1995): https://www.ri.cmu.edu/pub_files/pub1/thrun_sebastian_1995_1...
Also this seminar at Stanford on Lifelong Machine Learning (2013): https://www.seas.upenn.edu/~eeaton/AAAI-SSS13-LML/#Schedule
Applied category theory and uncertainty logic could be added to math.
Usually things are under-investigated because they're hard.
or useless.
or naive.
Or heterodox.
Nothing on this CS list is even close to heterodox... In fact, exactly the opposite.
This is sort of a ridiculous list. There is, possibly by definition, an infinitude of under-investigated fields. A more useful list might be a list of OVER-investigated fields, such as P vs. NP, Deep Learning, Consciousness, fMRI, ...
What makes a language "more productive"? I feel like this term first appeared in the age of Ruby on Rails, but I've yet to see any sort of study. Now when I see the term, it immediately raises suspicions and has the opposite effect from the one the writer intended. Obviously the best tool for the job is the one you know how to use proficiently, but is there a magical computer language that can turn average programmers into high-performing ones?
Transfer learning is no longer under-investigated. Just look at how the NLP and CV communities get state-of-the-art results.
Transfer learning is absolutely under-investigated. The current results in CV are awesome, but they only pertain to CV. There's little underlying theory that helps you apply it to other areas. Mine, for instance, is robotic manipulation.
There's a huge difference between "under-investigated" and "doesn't live up to the initial hope/hype".
It's possible for something to be over-investigated and also not produce results. See also: the build up to AI winters.
Coincidentally, I ran into this paper just now, on latin squares (one of the mentioned fields): https://malmskog.files.wordpress.com/2011/10/revised-math-ma...
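For readers unfamiliar with the object: a latin square is an n×n grid in which each of n symbols appears exactly once in every row and every column. A minimal sketch of the classic cyclic-shift construction, with a checker for the defining property (function names are my own):

```python
def cyclic_latin_square(n):
    """Build an n x n latin square where row i is 0..n-1 rotated left by i,
    so each symbol appears exactly once per row and per column."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(square):
    """Check the defining property: every row and every column is a
    permutation of the same n symbols."""
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({row[j] for row in square} == symbols for j in range(n))
    return rows_ok and cols_ok

print(cyclic_latin_square(3))  # → [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
print(is_latin(cyclic_latin_square(5)))  # → True
```

The interesting research questions are of course about counting, completing, and structuring these objects, not constructing one.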
Anyone know if there's been research into terraforming via asteroid/comet impacts and trying to simulate that?
PBS Space Time covered the idea a bit in a recent video https://www.youtube.com/watch?v=FshtPsOTCP4. They had some numbers but I don't think they referenced any formal research in that one.
That is NOT under-investigated, you just didn't find who is investigating that.
> Computer Science: Existential risks posed by technical debt
The term "technical debt" has always rubbed me the wrong way.
Most technical decisions were sound... at the time!
I agree, people from 1999 didn't predict what would be happening in 2019. But why is that considered to be some sort of debt?
That is not the only way technical debt gets introduced.
Often, teams cut corners to release a feature earlier/on time, and only make it work for the MVP use case without restructuring the codebase to fully accommodate the change. In this setting the term debt is pretty fitting.
True tech debt occurs when a business under-invests in its core technology over a long period of time, or deals with concept drift in its business. I know of multiple Fortune 500s that are reliant on bespoke emulation of hardware and operating systems that haven't existed in decades; even worse, the source code for the software they're running may no longer exist in any usable form.
In many modern web companies a given project has a useful life of ~3-5 years; if it's still running by year 8 with a team that's been on KTLO (keep-the-lights-on) duty, a few things are probably true.
A: No one knows how to productively add features.
B: The business need for the project was much larger than the KTLO funding would imply.
Odds are at this point there is a long list of user complaints, year+ old feature requests, and excuses being made to the board for why some initiative is facing yet another delay.
Perhaps we should be talking about software depreciation rather than tech debt?
Let's also talk about why LTS is a dangerous idea: It provides an excuse not to update.
Many tech debt traps start with relying on an LTS version of an OS or libraries. The philosophy behind that is that software behaves like a chair: you buy it once, and then you can sit on it until it is no longer needed.
A much better analogy is a horse: you need to feed and take care of it daily, and you need to be ready for it to die while you still need it.
Long-term systems evolution and management might be a better way to think about it. Lehman at Imperial looked into it in the '90s (https://www.researchgate.net/publication/220902836_On_Eviden...) but no one has done much since, and yet we are increasingly dependent on platforms and 20+ year old systems.
No one knows how to run and manage these systems, yet we do it all the time!
Debt is not always bad.