Fashionable Problems
(paulgraham.com)

I think pg understates the problem. It isn't just fashionable. Think about how the solutions we use today evolved.
Someone had an idea originally, and certain decisions were made about that approach; those that best adapted to the conditions of the time were successful... rinse and repeat over several decades.
Those ideas that preserved the past were more likely to succeed because they preserved the ecosystem that already existed. Ideas that diverged too much from the existing successful tech had a huge obstacle in their way: they had to recreate all of the existing solutions in their new model. So even if they would lead to a better solution over time, they may never get past that initial roadblock. And the longer we stay on the path, the bigger that roadblock becomes.
If we are all thinking of ideas around the existing model and assuming all of the existing assumptions, then we end up with similar solutions (fashionable solutions). We've reduced the solution space, leading to a limited number of solutions.
Perhaps groundbreaking/valuable tech can be found by questioning those existing assumptions; identifying where tech is still built on assumptions that are no longer true; or reexamining past solutions to see if they can be solved in better ways today with what we've learned since then...
This is called "first principles thinking" and is one of the most reliable methods for producing novel insights into a problem space. It is uncommon in practice because it has a high cost both economically, because you are re-deriving everything you think you know from scratch instead of "standing on the shoulders of giants", and socially, because you are deviating from orthodoxy promoted by high-status individuals.
This kind of relentless indifference to conformity is a rare quality in people.
There's also the cost that you are likely to just spin your wheels retreading old ground. Examples to the contrary are notable, but notable because they are rare.
I made this assumption for many years, and yes, that cost would be high. I eventually discovered that, more often than not, everyone assumed that someone must have already investigated a particular hypothesis, but if you tried to identify that "someone", it turned out they didn't actually exist. After doing this exhaustive search a few times in a few different domains and coming up empty-handed, it changed my perspective on the matter, to my great benefit.
Searching for evidence that someone has actually done the work is relatively efficient. It never fails to astonish me how often everyone believes a particular bit of ground has been thoroughly trodden yet, when I try to find concrete evidence that someone has done the work, there is none. There is a strong cognitive bias (I don't know if it has a name) where everyone assumes that someone else has already tried every obvious or reasonable approach, and that belief is treated as fact.
>if I try to find concrete evidence that someone has done the work, there is no evidence that anyone actually has
Unfortunately, the fact that negative results tend to get little if any publicity works against you here. And it's probably worse outside, not inside of academia. How often would a project team in some big company, or a couple of guys in a garage try out some promising alternative approach to something, fail to realize an advantage over the conventional approach, and then go out of their way to publicize that failure? That would be extra work for no - or even negative - gain.
It might just be meant to be humorous, but even "If at first you don't succeed, destroy all evidence you even tried" sounds more likely than "If at first you don't succeed, put some extra effort into telling everyone".
Would you be willing to give an example of an area where it was widely thought that "someone" had already checked all the obvious possibilities, but you were able to find an obvious possibility that actually hadn't been explored?
I think you're arguing for doing surveys of literature, which I would also argue for. My argument was against the veneration of From First Principles, which is largely incompatible with Maybe Someone Already Did That, More Completely, More Correctly, A Long Time Ago, And It's Easy To Check First.
This here is the real danger. Many people choose to reinvent the wheel and end up finding out that their approach was actually horribly flawed compared to what already existed, or they turned out not to have the resources to see it through. It takes a brilliant person with a lot of luck to make this work in an advantageous way.
Two great examples come to mind right away, both from Elon Musk. SpaceX was founded on first principles and now he's been able to undercut competitors by an order of magnitude. Conversely, his attempt to approach car manufacturing from first principles in the gigafactory has proven to be a complete disaster.
I think the older and more competitive a space is, the less likely a first principles approach will bring success.
Perhaps it’s not the age per se, but the number of players over time. (Number of car manufacturers >> number of space companies.)
That was my general point (without making the wording too complex)
>Conversely, his attempt approaching car manufacturing from first principles in the gigafactory has proven to be a complete disaster.
Nonsense. What is that based on?
>Conversely, his attempt approaching car manufacturing from first principles in the gigafactory has proven to be a complete disaster.
Lol yes, manufacturing one of the best selling cars in the country was a great disaster.
There are also the times when you discover a tech, a library, or a language that supposedly solves your issue, but in reality only goes 80% of the way.
Unfortunately, it can take a lot of work to then a) find workarounds (which in my experience tend to become a maintenance nightmare) b) try alternates (which may have other but similar limitations) c) pull out and start doing it yourself anyway.
So this is a worthwhile investment when the existing solutions are insufficient to the task and there is great opportunity in reshaping the landscape.
I find that mentoring a novice apprentice (both teaching general domain knowledge and describing specific architecture decisions in projects they’re getting involved in) is a great way to get back to first principles.
The expected outcome is an increase in human resources, so it somewhat offsets the economic costs.
One caveat is that this process requires the willingness to rethink your own established models, and recognising when they might’ve been outdated or incorrect takes some self-discipline.
Mentoring is the fastest way for one to learn or relearn first principles. Explaining an idea to someone requires that you speak to their level of understanding. Thus, explaining a complex idea to someone with understanding of only fundamental truths (novices), requires that you only speak in first principles. If you cannot, you will provide a bad explanation. When you eventually are able to provide a good explanation, you will know you have grasped the fundamental truths of the idea.
It is common for the young, remember? It is also why you should vote for the young and idealistic; not cynical old farts like ourselves.
and how is that different to the NIH syndrome?
It’s the exact same thing, but successful. Part of being “brilliant” is knowing when to apply first principles and when to build on what exists.
I put “brilliant” in quotes because unless you can do this consistently, you can also just get lucky...
Many insights to be drawn from this comment. One that is easily missed is how hard it is to fight against the existing ecosystem/assumptions. I've found that a lot of people either don't challenge anything, or make a point of challenging everything. A strategy that's worked fairly well for me so far is to challenge one(!) assumption after a lot of careful analysis, and then try to adapt to the ecosystem as best as possible. Most revolutionary things are not made in one big bang, but in many incremental steps. They only look like big bangs in hindsight.
Even the nuclear bomb was fairly incremental, with truly a big bang in the end.... (pun intended ;)
The problem is that multi-layered problem solving is super hard to execute. It's not that it's hard to find a solution that would be orders of magnitude better; it's that it's risky, an all-or-nothing path. At the very least it requires a lot of confidence to take on the existing market.
With waterfall-style planning, that would work. Unfortunately it got a bad name in the '90s and early '00s: the wild west of inexperienced developers who needed to solve business problems they didn't understand, combined with clients who didn't understand technology.
I believe this is a great period to do bigger things, and you see this happening with, for example, SpaceX, Tesla and Apple: space is happening again, companies are creating super specialized and complex ICs, there's actual business value and consumer value being produced, and projects are reasonably on time at these companies. This is in contrast with companies that, it seems, take more of an iterative approach: Facebook, Google, Amazon. No huge innovation going on there.
Aren't those companies great examples of this though? They have a great vision, but really try to play inside the existing system except for one thing. Tesla, for example, hasn't yet done much innovation that the other car companies don't, except for betting hard on electricity (or more specifically, batteries). A more radical approach challenging the whole system on all fronts would be to rethink personal transportation (perhaps cars vs. horses is a better example of that). Sure, they realize that in order to make electric cars viable, they have to put up chargers. But that's really just a means to an end (batteries/electricity as fuel). Same goes for how they sell cars - they have to in order to break through the market; it's not something that is necessarily core to their business or why they exist. They exist to replace ICEs with battery-driven motors. The rest can stay. (Autonomous driving might be more revolutionary if they succeed and create something different than what cars are today, but that's a lot of ifs and buts.)
Apple is also very much in that corner. The smartphone existed long before the iPhone. They just packaged it and polished the concept so it made sense for a broader market (and they are/were really great at that), but how radical were they really? Revolutionary after a while perhaps (if you can be a slow revolutionary - seems a little bit contradictory), but technologically they just put a lot of existing techs together and packaged it differently than the existing players for a different market/crowd and had tremendous timing (I bet the smartphone revolution was coming either way, they just helped define the concept and bridge the gap from early adopters to mass market).
You, like many others (myself included until some time ago), are missing the point, which can be illustrated with Apple: Jobs broke the carriers' control over the software running on the device, which, until Jobs, had been a given, unquestionable fact of life, like the sky being blue. That is what unleashed the revolution, not the nice piece of hardware, which would have been just another brick under AT&T's control like many before it.
Consider that Tesla is still an abject failure on their goal of converting the world’s transport to sustainable energy.
I don’t think the strategy you outlined will be enough. In addition to “advance batteries” I believe “full self-driving” as well as “mobility as a service”, “radically production-driven design”, and possibly “car bundled with energy production and storage devices” will be required for Tesla to hit their goal.
If you take those 4-5 things together I do believe that constitutes a total reinvention of the automobile.
An interesting theory as to why we never evolved with wheels is the idea that an evolutionary improvement requires a viable intermediate state...something that works well enough in the transition from inferior to superior. It is worth noting that human inventions can evolve in much faster timelines than biological evolution, but no matter how fast they come, there does need to be a way to fulfill intermediate needs, or else there is no path forward.
What follows is my own personal views on current technological evolution, which may be wrong. Regardless:
Electric vehicles have been evolving extremely rapidly, and now they are viable enough for most privately owned passenger vehicle use cases. Through continuous evolutionary investment, they may make their way to some commercial vehicles, maybe even small planes. But the chasm is so extremely large for large commercial vehicles like airliners and cargo ships, that we would need massive (several orders of magnitude) technological improvements in energy density in order to even start considering them.
SOFCs are inferior to batteries and supercapacitors from an efficiency standpoint. They may never be the power system of choice for passenger cars or other small-scale and lightly used systems. But the thing they have going for them is their ability to evolve as the ecosystem evolves. They can run on diesel fuel, JP8, even crude. They can run on biodiesel, or renewable ethanol or methanol. And they can run on pure hydrogen. They can even run on a mix of all of those fuels. At every step in the transition to hydrogen, there is a viable intermediate state. For this reason, I think you'll see fuel cell powered cargo ships and airliners long before you see battery-powered ones.
> An interesting theory as to why we never evolved with wheels is the idea that an evolutionary improvement requires a viable intermediate state
Not really; wheels aren't an improvement in the first place so there is no need to explain why they didn't develop. Note that it's easy to make robots that use wheels, and difficult to make robots that use legs, but we make legged robots anyway so that they'll be able to handle environments other than dedicated roads.
(More recently, we make flying robots, sidestepping the issue that we don't really know how to do legs well.)
Okay, but evolution has given us some pretty expansive diversity in traits across the biological world. No single trait is Pareto optimal; each is relatively superior or inferior depending on context. Wheels have some pretty extreme efficiency advantages, useful for both speed and endurance. They come with disadvantages too, but there are plenty of animals that never need to leave environments where wheels would work well and legs aren't necessary for survival. There are some animals that will never leave grassy plains, for example. Yet none of them have evolved wheels either. We don't have marine animals that have evolved propellers, despite an edge in efficiency.
Evolution is content with a very long iteration cycle and a very high failure rate.
Human engineering is much more efficient - building things we understand, we accomplish in decades what takes nature millions of years or is straight up not viable.
I sometimes think about evolution and its relation to human engineering. I don’t know if it is a useful thought, but: evolution created humans, and therefore human creation is also in fact a product of evolution in nature itself.
In that way, evolution evolved itself by creating humans, and through us evolution is now happening at an accelerated rate in some aspects.
I try not to get all philosophical, because I know that other people, who are actual philosophers, have thought about these things already and I can’t compete with them but I can’t help but think about such things anyway.
Have you considered how such elements, essentially detached from the body (that's what would let them spin around), would grow, be nourished, and so on?
"Many bacteria are equipped with a flagellum, a helical propeller that allows bacteria to travel."
Humans didn't evolve with wheels because excepting the last few hundred (possibly thousand) years roads didn't exist.
How useful do you expect a set of wheels would be on the tundra? Or are you conceptualising a theoretical monster truck human? We'd be better off with tracks...
"Ideas that diverged too much from the existing successful tech had a huge obstacle in their way: they had to recreate all of the existing solutions in their new model."
You’re so right! There are great examples of companies that preserved the existing ecosystem of solutions (Facebook with PHP, Microsoft with C++, Google with Java and C++, maybe also Amazon with Perl 5 and C++).
There are also great examples of successful companies that in a certain sense did or had to do things in an entirely new model (WhatsApp with the Open Telecom Platform, F5 Networks with their data center FPGA-powered hardware load balancers, Tesla and SpaceX, and so on).
There are also great examples of companies that did both, where they preserved an existing ecosystem but also added another ecosystem on top to fix flaws (NeXT Computer with Smalltalk and C, Stripe with the modern version of that as Ruby and Go, Twitter with Ruby and Java, so on).
Microsoft is the best example of a company that can succeed even with what seems like a crazy choice. Almost everything being written in C++ sounds insane to the point of being business suicidal, but they worked hard enough to make it work. If Microsoft can succeed in the ways that they have with C++, and if Facebook could succeed for as long as they did with PHP, and Instagram can allegedly succeed with Django[1], then anyone can succeed with anything as long as you can endure the stress that a seemingly peculiar decision might cause you. And those standard ecosystems that they chose all had immense benefits, they just also had some pretty immense weaknesses as well (Powering 500 million daily users primarily with Python scripts?? That just seems pretty crazy to me, if it’s true).
[1]Still very unclear about how much Python actually powers the web services behind Instagram. <https://instagram-engineering.com/tagged/python>.
I don't see why writing an operating system and surrounding tools in C++ is a crazy decision (I'm assuming you meant Windows & desktop apps by saying Microsoft). Certainly not in the early '90s, and if you already have a big codebase of it, sticking with it doesn't seem crazy to me.
People make too much of a fuss over the differences between programming languages. There are some important differences (whether or not a language is memory safe is a big one, but hardly critical to a successful business), but by and large the difference between using, say, PHP, Python, etc. for your program is superficial in the extreme.
The fact that many large and lucrative code bases are written in c++ should disabuse you of the idea that c++ is suicidal.
“Many” is almost underselling it. It’s literally EVERY web browser that anyone uses. I don’t know where people get these ideas. OP has a shallow understanding of programming languages, software development, and their history.
> It’s literally EVERY web browser that anyone uses.
And is a fine example of path dependence.
No one can write a browser in any other language not because C++ is necessarily better but because the activation energy to bootstrap to a useful browser is so large (for example, one must provide a smoking fast Javascript engine before one even considers the web page rendering pipeline).
It will be interesting to see how much of the Firefox codebase gets taken over by Rust as time passes.
Basically, C++ developers (like me) are stupid. We're like primitive species, stuck with C++ because we can't understand superior languages. The Lisp developers tried to bring us the light, but we chased them away by throwing rocks at them. Now the Rust evangelists are doing the same thing, but I'm afraid we might take a break from writing video games, stock exchange software, safety-critical software, CAD packages, web browsers, video codecs and the like, to throw rocks at them.
My point is: maybe C++ actually _is_ the superior language. It can't be a coincidence that all the best software projects tend to be written in C++, and not in Haskell or whatever.
> My point is: maybe C++ actually _is_ the superior language. It can't be a coincidence that all the best software projects tend to be written in C++, and not in Haskell or whatever.
If we're going by best software written, I suspect that means C is the better language than C++ by a wide margin. It would be interesting as to whether Visual Basic would be superior on that axis as well.
C++ is the better language--except that we always have to expose a C FFI because nobody seems to have enough critical mass to stabilize the library ABI. C++ is the better language--except that Apple wrote another language because they don't believe that and that Mozilla wrote Rust because C++ wasn't good enough. C++ is the better language--as long as you have a new codebase that only uses the latest features and Satan help you if you have stuff from pre-2005 because God won't be enough. C++ is the better language--as long as compile time isn't an issue.
I can go on if you wish...
I don't think C++ developers are stupid and those choosing to start new projects in it generally have really good reasons for doing so. I also think that many older projects are in C or C++ via path dependence because C++ was the superior choice when the project started and now they have far too much code to switch.
I also believe this will be why Rust eventually becomes a very important language--Rust allows you to modernize that old codebase in a piecemeal fashion.
Dunno. As a mainly C++ developer I think Rust is a real alternative because it tries to solve the same problems better. Lisp and Haskell do not solve the same problems. Huge Lisp codebases probably become hard to understand quickly, and both Lisp and Haskell don't produce very efficient code (or make it hard to produce efficient code).
Time will tell. At the moment I'm not convinced. Rust is a marginal improvement which is not worth the trade-offs in terms of immaturity, lack of libraries, books, standardization, etc. It might end up just like D. Or it might actually find itself a viable niche.
AFAICS D screwed up with its "optional" (but not really optional if you want to use the standard library) garbage collection. Rust contains no such mistakes, and seems to contain fewer mistakes than C++. Any experienced C++ programmer should know that C++ contains plenty of mistakes. Exceptions are very awkward, the string class is useless, hash maps are much slower than they could be (the std API enforces that), modules should have appeared 20 years ago, non-const references as stealth pointers are bad for readability, [...]
Why is the C++ string class useless?
It's basically no better than vector<my_char_type> - i.e. it has no Unicode support whatsoever and very few string-specific convenience methods. Using algorithms is possible, but tedious.
Because I often work with Qt, I'm comparing it with QString, which is much more than a vector of chars. https://doc.qt.io/qt-5/qstring.html
Well, you just kind of answered your own question. There are the STL, Qt, and Boost, like in all other languages: they do the same thing in different capacities, and you should select based on your needs. I think std::string can be supplemented for Unicode support; I want to say the fmt library was the supplement, but I may be wrong.
I used std::string and QString extensively for 4-5 years and find QString inexplicably bloated and un-intuitive for my use cases.
I don't understand your problem with QString, but anyway, AFAIK the common way to get Unicode support with std::string is ICU. ICU is quite large (~15-20 MB binary) and it breaks binary compatibility with every release.
I don't do C++ anymore professionally, but from I remember, std::locale was very useful: https://en.cppreference.com/w/cpp/locale
And QString has no magic to it other than wrapping its underlying data in UTF-16, which std::u16string does too.
It's not perfect but it gets the job done, IIRC.
> "[Lisp and] Haskell don't produce very efficient code (or make it hard to produce efficient code)."
Curious here. Are you speaking from experience -- i.e. you tried and failed -- or is this simply something you guess must be true? People write high-performance software in Haskell, and it's their tool of choice.
> People write high-performance software in Haskell, and it's their tool of choice.
I've heard of those but have never managed to see one. Do you have some examples ? (to give you my "baseline" - handling >500k messages/second on a desktop cpu for instance as this is something quite easily achievable in C++)
Modern C++ is pretty good, honestly. With the power of the STL, you can do so many neat things and make beautiful code. Example: https://www.youtube.com/watch?v=pUEnO6SvAMo
It's almost like language choice doesn't matter at all...
Language choice is an implementation detail. As long as the language isn't technically unable to do the job (ie you can't write a device driver in Python), it doesn't matter much. It may be intensely interesting to practitioners, but it's not a make or break decision. Use what makes the team comfortable.
You're describing what economists and systems theorists call "path dependency". It is common that someone has questioned the assumptions, but that the cost to overcome the path dependency is higher than the gained efficiency or capability.
Meaning: the rareness of disruption is to be expected, because most disruptions fail in stifling silence.
Another hurdle to examining alternate models is “theory induced blindness” where new approaches are either unseen or written off because they don’t fit well within a more traditional framework. It’s especially hard to convince those who have staked a career on the theory that blinds them.
Which means that this is what the exploration pattern ends up looking like in practice: https://www.smbc-comics.com/index.php?db=comics&id=2866#comi...
Aside: This SMBC shall be long etched in my memory: "All the best work has been done over here! [...] The funding is here too!"
---
Here's a teaser challenge for someone who has access to HN comments data for analytics: How often does an early comment come to dominate the discussion, rather than a better comment which comes later? Assume that there is some IID stochastic process which generates comments of varying quality (play with your distribution of choice), and plot what the time distribution of "best" comments looks like under that model -- compared to the time distribution of most upvoted/discussed comments on HN (can try different composite metrics here too).
---
Provocative meta question for PG: Are (venture-funded) startups the "fashionable" idea that is over-subscribed? How would one know? (suppose we restrict the discussion to Silicon Valley)
Isn't this just evolution versus revolution?
Most of the time new products evolve from the current state. Sometimes there's pent-up demand/frustration/hostility that allows for a clean break to a new solution (even if that solution has just as many problems, at least they're not the same problems).
Probably nine times out of ten people are looking for a better buggy whip. Without any other information, improving what you have is statistically the better bet. You have a very strong predictor, but no guarantees.
Inertia seems a good metaphor too, although that too lacks a bit of the specificity you're getting at.
This post sorely lacks evidence for its big first claim:
> I've seen a similar pattern in many different fields: even though lots of people have worked hard in the field, only a small fraction of the space of possibilities has been explored, because they've all worked on similar things.
Anyone want to step in with some examples? Without them, the thrust of the essay seems to be: "If only other people understood what problems are worth working on! Especially in the well-studied areas of essays, Lisp, and venture funding! Too bad they do not. Well, goodbye."
There is a lot of evidence for what Paul says once you dig into a specific field. Taking two fields I know well: in physics, there were decades of work on fundamental questions about systems in equilibrium, while many obviously important open questions in out-of-equilibrium systems went neglected until the last 10 years or so, when there's been a huge upsurge. These questions were known to be open 20 and 30 years ago, but just weren't as fashionable a topic. Anyone senior enough in those earlier decades knew there were tons of open questions, but also that relatively few people were working on them for whatever reason.
In machine learning, there are currently a lot of people working with neural networks, but relatively fewer people exploring alternative model architectures. So much so that issues specific to neural networks sometimes get framed as fundamental to machine learning itself. I'm personally exploring an alternative class of models called tensor networks with many possibilities for research directions and lots of open questions but only a handful of people work on them. One reason for working on a popular idea is that it's nice to work on a topic where you have many colleagues and know in advance that your model is likely to give good results on challenging datasets.
I know next to nothing about physics, but I do know some about ML.
I think the reason tensor networks are unexplored is interesting: the tools and techniques for dealing with them build on those for neural networks, and the theoretical benefits are not clear-cut enough to gain them a foothold over the practical results of neural networks.
Forgot to mention, but here is a recent theoretical result (prediction of generalization performance giving size of training set) based on tensor networks: https://itensor.org/miles/GenerativeMPS.pdf
Agreed. There is still a lot of work left to do!
I am wondering how far the essay would go if it were not from PG.
This is basically my biggest critique of PG and various other YC leaders. Many of the conclusions, while probably more "right" than "wrong", are derived from intuition and observation, not evidence. It's even more ironic given how much emphasis the firm places on evidence-based thinking from its portfolio founders, versus anecdotal thinking.
Great example:
"One quality that’s a really bad indication is a CEO with a strong foreign accent"[0]
The danger is that I think pg has been "right" about so many things that every time he pontificates about something it's treated as dogma. So when he's "wrong", it'll be ignored. Impressionable people (which likely describes many young tech entrepreneurs) will therefore be led astray.
[0] - https://www.forbes.com/sites/knowledgewharton/2013/12/19/292...
Probably not very. I get the distinct feeling that the last few essays were very light on content/interesting insights, and did well on name value alone.
Would be interesting to actually test that to be honest. Get Paul Graham to set up another site under a fake identity, post the next say, three essays there, then submit 'em to HN under the same identity and see the stats.
In some cases it’s because the territory is mostly mapped by his previous work. “Novelty and Heresy” was a belated sequel to “What You Can’t Say”, for instance.
OTOH, seeing pg write essays again pleases me in much the same way that one might be pleased if their favorite long-defunct rock band started recording a new album.
Is this even an essay, or am I missing something? The content behind the link is a few sentences at most. Hardly an essay.
Haha, I was wondering the same. Probably not that far. But at the same time, it is important who says what, so you can't just ignore that.
To start with, it would be called a tweet and not an essay.
Two areas where I have a little experience:
Economics: There are surprisingly few people studying very large, grand topics such as inequality. You've probably heard of the handful of economists who do. It's unfashionable due to the influence of the Chicago school and its focus on free market principles, as well as many other factors.
Physics: Good luck getting any interest or funding if you are not studying the currently dominant theory in your field (even if there has been no progress in decades).
Both examples are wrong. Lots of economists are working on inequality; it is even a hot topic. Your physics example is so vague that it is meaningless. Name one promising theory/approach in physics which is not getting interest because it goes against "the currently dominant theory of the field". Tip: LQG does not count.
No, lots of economists don't. Especially considering how important of a problem it is. It's a "hot topic" because of public opinion at the moment. We have not been attacking it in new and creative ways for decades.
> Name one promising theory/approach in physics which is not getting interest because it goes against "the currently dominant theory of the field". Tip: LQG does not count.
Lack of such promising theories is precisely the point (I guess we are talking specifically about the problem of quantum gravity). Everyone is working with the same 1.5 old approaches and the progress has stalled. Of course individually this strategy is rational because if you try to think of something new and it doesn't work out (which is very probable) your career is toast. But imagine what the brainpower poured into string theory could achieve with a more breadth-first search strategy.
There are a large number of people working on inequality.
The two most common fields studying it are: 1) Development, 2) Labor. There are also a number of macroeconomists who have studied the impact of inequality on growth. The reason you may not read about it is that the relationship between inequality and growth is a largely "solved" problem. It is "unfashionable" because of that.
Is it? Can you link me to an econometric review?
The fact that the field thinks it is a solved problem is a marvelous example of what the essay is talking about!
Not exactly. It is a political problem not an economic one.
I genuinely don't understand where these ideas about economics come from. Inequality is a hot topic, and it has been for years.
Okay, but to play devil's advocate -- if you study something controversial and/or unfashionable in academia (such as economics of inequality) aren't you just going to be ignored by the community?
I think being unfashionable is more viable in fields that are meritocratic and don't require popular approval to succeed.
I have this suspicion that, in economics, trends and fashions have sometimes formed around principles whose primary attraction is that they make the math tractable.
In mathematics I'd put forward the conjecture that "for every proven theorem you could ask at least 5 more similar questions which are unproven."
For example it is proven there are infinitely many primes. Are there infinitely many primes that differ by 2? By n for any n? Are there infinitely many palindromic primes? Are there infinitely many primes of form n^2 + 1? Is there always a prime between n^2 and (n+1)^2?
If this is true then, assuming there are 200k proven theorems, there would be >1m readily stated but unproven conjectures, which would mean it wouldn't be too hard to find areas no one is looking into.
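None of these conjectures can be settled by computer search, but it's easy to see numerically why they look true. A quick sanity check for two of the questions above (a sketch using naive trial division; `is_prime` is my own helper, not a library function):

```python
def is_prime(n):
    """Naive trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Legendre's question: a prime between n^2 and (n+1)^2, for every n we check.
assert all(
    any(is_prime(k) for k in range(n * n, (n + 1) * (n + 1)))
    for n in range(1, 200)
)

# Twin primes below 100; the conjecture says this list never ends.
twins = [p for p in range(3, 100) if is_prime(p) and is_prime(p + 2)]
print(twins)  # [3, 5, 11, 17, 29, 41, 59, 71]
```

Of course, checking small cases is exactly what makes these conjectures tantalizing: the evidence is overwhelming, and the proofs are nowhere in sight.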
>Are there infinitely many primes that differ by 2?
Pardon the digression, but the Twin Primes conjecture was proven by a Subway restaurant worker a few years ago.
I'm not sure, I'm not an expert, it says here the twin prime conjecture itself is still unproven.
"On April 17, 2013, Yitang Zhang announced a proof that for some integer N that is less than 70 million, there are infinitely many pairs of primes that differ by N."
It bears notice that the Subway worker, Yitang Zhang, had a math PhD from Purdue.
When I was reading the essay I was immediately reminded of the concept of paradigm shift (https://en.wikipedia.org/wiki/Paradigm_shift). The gist is that during periods of "normal science" most scientists are working within the framework of the dominant paradigm which, among other things, determines what kind of problems are worth working on. But every once in a while the paradigm shifts (which happens comparatively rarely, on the time scale of decades), old problems are deemed irrelevant and everybody piles on to work on the new stuff. The original example is physics but other fields are pretty similar (in our field current AI craze comes to mind).
I might be overgeneralizing, but I feel like 10-15 years ago everyone wanted to do wireless communications, now everyone wants to do AI, and probably in a few years everyone will want to do blockchain or something else. I find it kind of hard to believe that those people are all passionate about the field rather than the massive career opportunities.
> This post sorely lacks evidence for its big first claim
Agreed.
I think the premise is true, but PG has given no actual insight.
NGOs are a good counterexample. In developing countries we know of working solutions, but fashion rules: I like computers, so I'll give them to the poor.
Elon is a good working example: rather than looking at existing tech in spacecraft, he looked at how to make them cheaper using steel and methane.
There's a Just So Story about rapid prototyping that I think fits how to break the mold. I can't remember which competition, but the winner won by working out how to cheaply rapidly prototype working models rather than tackling the problem head on.
So they didn't set out to be unfashionable so much as create a system that allowed for unfashionable ideas.
> This post sorely lacks evidence...
It’s a blog, not a peer reviewed scientific journal.
String theory.
Seems like a lot of weirdly defensive people in this thread.
But the essay doesn't really seem that revolutionary, more common sense. If you want to make an impact, don't work in an oversaturated field. The low-hanging fruit is probably already picked and other people will probably get there before you do. But you also don't want to work in a field nobody cares about, as no one will care. Working on a problem with proven demand that seems "boring" and hasn't changed much recently is a good bet, as there are probably new insights you can apply and new contexts that have appeared since the last time there was a frenzy for that field.
The advice to not work in an oversaturated field, if you want to make an impact, looks to me a bit like the reverse of looking for one's keys under the lamp post, because that is where the light is. We should give some credence to the proposition that saturation is the wisdom of the crowd at work, figuring out where a breakthrough is likely, and it is usually only with hindsight that over-saturation is apparent.
I somewhat suspect that by the time something is "popular" it is, almost by definition, oversaturated. It's sort of like weird investment strategies (e.g. always buy stocks on Mondays, or whatever; I don't actually know anything about stocks). Sure, some of them may work originally, but unless I am extremely well connected, by the time I hear about it, everyone else knows too and the market has corrected for it.
Perhaps another metaphor is a gold rush. Even if there really is gold in those hills, if everyone knows there is, it's probably already too late to go out and buy a shovel.
That said, I agree that there is plenty of survivorship and hindsight bias when it comes to any advice on how to be successful.
There is a difference here between society's preference and what you should do as an individual. As you are saying, in crowded fields breakthroughs may be likely (or not, due to overexploitation, rampant fashion instead of first-principles thinking, and people digging themselves too deep into one paradigm), but in any case, you are not likely to be participating in them if you're not already a strong player. Thus, if you are doing something like research or development of new things, your marginal impact will probably be tiny.
I would put two caveats on this. First, if you are globally in a good position (world's top university, good connections to get into places) you may be positioned to meaningfully get even into a mature field. Second, if you don't want to do much research and exploration, it's reasonable to reap benefits from specialist knowledge of already proven fields. Just keep an eye on other options to transition to if everything eventually crashes.
Even in crowded areas you should benefit from wide knowledge of horizons and fundamentals beyond what is now fashionable. It may mean just more ideas and perspective. I work in NLP with an interest in AI, and I've dived fairly deep into currently out-of-scope things like rules-based NLP, symbolic AI, and biological neurons, their physics and simulations, etc. I hope this gives me more viability than your average deep learning guy from a moderate background.
> If you want to make an impact, dont work in an oversaturated field.
That's exactly the opposite of his point, if I understood it correctly!
Just listened to a podcast with a VC who applied this pattern to nuclear power. The big problem, which almost nobody was investing to solve, was nuclear waste. He went out, attended a number of nuclear power events, and found some people who thought they had a productive attack. They did, and after he threw some money at them, he ended up with a return of 40x or so.
https://www.bloomberg.com/news/audio/2019-08-16/josh-wolfe-d...
> Even the smartest, most imaginative people are surprisingly conservative when deciding what to work on. People who would never dream of being fashionable in any other way get sucked into working on fashionable problems.
This makes no sense. The concepts of "conservative" and "fashionable" are almost polar opposites. Becoming a musician or an actor is fashionable and the dream of many. But it won't bring any money to most people who choose that career. It's no conservative choice. Instead, it's conservative to go into STEM, law or finance. But those fields are "boring". The pattern repeats inside a field as well. It's conservative to be a COBOL coder or a DOS expert, and you'll certainly make money. But it's not fashionable. How come you aren't building a cryptocurrency-using, self-driving car that has a drone port on the roof! It results in the woke fields being overrun with very smart and capable people and capital, while tons of fields that use slightly outdated stuff are ripe for harvest but nobody is around to do it.
I think here by conservative he means doing what everybody else is doing. Risk averse. Following the mainstream. Going into COBOL may have been a conservative choice 40 years ago, but not anymore. The conservative choice is to go with the herd.
> The conservative choice is to go with the herd.
That's not conservative. Being conservative means that you only follow a change if it makes sense. If you adopt some new technology, you should do it because it convinces you that it's better, not because it's new, wasn't available before, everyone else is doing it, or any other such reason. If you are conservative, you don't necessarily end up doing what the majority is doing, as most times, the majority is following some empty hype.
That's because his use of that word is wrong: everything said in that article contradicts that statement.
I think your definition of conservative is correct, but it assumes the thing you’re trying to conserve is something like energy spent on new/different approaches. I think pg’s definition of conservative in this case assumes the thing being conserved is something like prestige or credibility.
As an aside, in my experience it’s a fairly common usage to describe any behavior that avoids risk along some dimension, which could very well mean going with the herd, if you’re trying to conserve social status.
> It's conservative to be a cobol coder or a DOS expert...
For most people entering a career in tech over the last 20 years those haven't been a consideration, never mind a choice (conservative or otherwise).
Nowadays the conservative choices in the sense that PG is talking about here are Java, .NET, Python, C++, C, and possibly AWS and Amazon. But these are also what the market demands.
You will get paid, and probably quite well, but they are not adventurous choices.
EDIT: AWS and Azure.
Is this the first PG essay that was composed on Twitter?
Many articles now are cleanly formatted tweetstorms.
Many PG articles? Which ones?
Just a general observation. Wasn't specifically answering your question about PG's writing.
Lol what do you mean a "general observation"? You stated plainly that "Many articles now are cleanly formatted tweetstorms". Which ones?
Curious questions are great, but please don't cross examine. That's in the site guidelines: https://news.ycombinator.com/newsguidelines.html
If you want to try working on unfashionable problems, one of the best places to look is fields that people think have already been fully explored: essays, Lisp, venture funding – you may notice a pattern here.
Do people really think the fields of essays, LISP or venture funding are fully explored?
I think so? They’re all expanding, of course... but none have really changed that much over the past few decades. There’s a ton more VCs than there were a decade ago, but has it really changed as a field? People tweak the formula a bit, but we haven’t seen a shift in the same way YC shifted things. To a lesser extent, pg was one of the first “tech essayists” (now it’s a genre, http://waitwho.is).
I don’t think pg is saying they’re being ignored, but rather that there’s a sense of “yup we figured it out, let’s just iterate slowly on it”.
(I will say I can think of much better examples, but I don’t blame pg for picking the three examples that sum up his entire career. He picked those three things when nobody else cared.)
Cryptocurrency ICOs showed venture funding could be innovated on way more fundamentally than "tweaking a formula". The simple ICO model may arguably have been largely misused, but it kickstarted a lot of promising research and experimentation[1].
[1]: e.g. https://vitalik.ca/general/2019/10/24/gitcoin.html, https://www.zfnd.org/blog/dev-fund-guidance-and-timeline
Mmm, no. Which is why I really don’t get this post. The advice seems to boil down to something like : “don’t work on something fashionable! Do something like VC funding with a twist, or your own take on essays!” (which seems to be very fashionable).
He's praising himself. Those are three areas _he_ explored.
I think the meaning is that they were at one time, namely, the time he got started on them.
I was wondering the same. I get the message from the post but I don't think the examples are good.
> If you can find a new approach into a big but apparently played out field, the value of whatever you discover will be multiplied by its enormous surface area.
If you found something in a big, fashionable field with a huge surface area (cough ai/ml cough), wouldn't this multiplier apply to that fashionable field, too?
Fashionable problem. Played out field.
So, essentially, yes - fashionable fields will be well explored; the essay is about not getting sucked into fashionable problems in whatever field you're working in.
So in the case of AI/ML, the fashionable new thing is unsupervised learning, self-supervised learning, reinforcement learning, GANS, etc. The played out problems are image classification, speech recognition, etc.
Well I guess you can't see AI as a played out field yet. More likely it is just starting to take off. Say you find a new exciting spin on ERPs to replace e.g. SAP solutions that will have a large surface area. At least that is how i read the linked post.
... or payment processing. Look what Stripe is carving out.
There are still many problems in payment processing. One I can think of is acceptance. Merchant fees are way too high especially for physical payments. I can't pay using card at my barber or local convenience stores here. I suppose the reasoning is that card fees make it not worthwhile accepting for amounts below, say, £5-10 (inflated for microtransactions). Doesn't make sense for them to accept a big % loss in profits just to accept cards over cash.
If we're to become a cashless society extortionate merchant processing fees need to lower.
Humanity as a whole, as a problem-solving algorithm, might be similar to a particle swarm in very complex neighbourhoods and topologies.
Particles (individual humans or small groups) move around in problem space with a position and a velocity. Each individual's movement is influenced by its best known local position but also by the best known global positions in the search space. When better positions are found by others, individuals change their course (take hints) and move towards them.
The problems would be the same. Too much randomness and it's just a random search. Too much convergence leads to local optima.
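The analogy maps directly onto the classic particle swarm optimization algorithm. A minimal sketch (the function and parameter names are my own, not from any particular library; the `w`, `c1`, `c2` values are conventional defaults):

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f: each particle is pulled toward its own best-known
    position (local memory) and the swarm's best (global hint)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best known to the whole swarm

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward own best + pull toward the swarm's best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(v * v for v in x), dim=2)
```

The `w` (inertia), `c1` (self-attraction), and `c2` (social attraction) knobs are exactly the randomness-versus-convergence trade-off: crank `c2` and everyone herds toward the same local optimum; crank `w` and it degenerates toward random search.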
This advice is just so... airy. Like most of his advice. "Build something people love." Got it, now what? What do I build, exactly? Like the incredible Onion talk on startups: "Step 1: come up with an idea. Step 2: Build it. We're at step 2 - we're halfway there."
I think there are still a few major breakthroughs left in our understanding and use of electricity.
I love this mini-essay.
The counter to this is that it'll be harder to raise capital for un-fashionable problems.
If you pitched "ML for sandwich makers" right now you could raise a million bucks because so many VCs are making fashionable bets on ML.
So what? Raising is not success.
It's often a pre-requisite for success. It doesn't matter how good your idea is, if you can't get it implemented and supported by a sales team, support team etc. it isn't going to work.
There are brilliant ideas out there, but very few of them are brilliant and require 0 resources to achieve.
> Raising is not success.
People think raising is a signal for success, when it's more like kissing the dice at a casino
It’s pretty successful as failures go.
I guess, if you define "success" as "managing to borrow money"...
Fashionable problems are the ones for which you'll have the easiest time recruiting employees, raising funding and generating press coverage, even taking into account that their fashionability leads others to pursue them.
Well, that's the whole paradox of fashion, isn't it? You want to follow fashion but be almost at the top of it. If you are too much next year's style, nobody thinks you're fashionable, just crazy. But if you follow last year's trend you are also unfashionable.
The paradox is that nobody really knows what will be the next fashion and similarly nobody knows what's the next worthwhile problem-area to work with.
There's a good reason why fashionable problem-areas are well-researched: the results so far have been useful and promising.
My worry in solving an existing problem in a novel way is that the incumbents can catch up faster than you can scale.
If I were to take on Netflix/Disney/Twitch with some new kind of video entertainment product, they'd have deep pockets to fund a competing offering.
The lever of equity might work to attract better talent, but only if you succeed. There's a lot of risk.
Scaling rapidly also means giving up control as you seek capital. It'd be hard to organically grow and go unnoticed.
My experience working for the 800lb gorilla incumbent was that we didn't take competition seriously at all. Even stuff competitors did that would be trivial for us to replicate got put in the "not a priority" bucket. The few cases where we were forced to match a feature you were looking at a lead time measured in years, tending to infinity if it threatened the influence of a powerful department. And this is the reactive stuff. Forget actual innovation, other than a few toy projects that never escaped the lab!
However, this was justified: even the most promising-looking competitors tripped over their own bad assumptions long before becoming a threat. We saw plenty of novel ideas but they'd always be sunk by a failure to understand the basics of how our market worked - things like trying to put a complicated app with a thousand options at a point in the journey everyone is trying to simplify and time-optimise, etc.
If someone who knew the market well had gone at it seriously and solved hard problems rather than apply the usual hand-waving "tech! blockchain! magic!", by the time we'd noticed it would have been too late to respond. You'd hope more recent incumbents like Netflix or Twitch might be a bit more responsive, but corporate inertia can build up surprisingly quickly.
Well, I have a hard time thinking of a field that I believe is fully explored. History has shown time and again that it's really easy to be fooled in that respect.
Agree. It is also a good reason for having multidisciplinary interests, so we might have different ideas and look for very different solutions to problems.
In this thread a guy talks about using the Q language and then someone else jumps in and says 'it's not scalable etc.'
https://news.ycombinator.com/item?id=21854793
What if all this cloud/k8s / serverless stuff is really all piffle? What if running stuff on dedicated hardware ends up a better solution in some fashion?
It may be fashionable and overused, but I don't think we can talk about the cloud as if it were still something new and unknown. By now I think one can already make an informed decision on which to use for a particular problem.
> an informed decision on which to use for a particular problem
I doubt that really is the case. I think most decisions are made based either on anecdotes, or whatever someone happens to have experience with.
It's rare that you can accurately predict the kind of workloads you'll have to deal with ahead of time, and it's even rarer that the people making the decision have experience with multiple completely different stacks.
And I don't really think it matters that much. Some people solve the problem with distributed document stores and key value stores, other people use a big transactional database and just keep putting extra RAM sticks in their server... I don't think there's always an obviously "better" choice.
The claim that k/q isn't scalable is false for anyone curious: k/q is incredibly scalable. That's a selling point of it.
Let's write "The Online Packaging System To Ruminate About Them All" (TOPSTRATA).
Are we not eternally one packaging system short of Nirvana?
Is this a repost? I feel like Paul has already written this essay.
Edit: Oh, it was originally a tweet: https://twitter.com/paulg/status/1183687634763309056?s=20
I'd suggest focusing on people and the problems they have rather than industry, your space, the news, and your colleagues. That way you'll serve real needs rather than getting a ton of street cred as you disappear down an academic rabbit hole.
"Fashionable solutions" are a related phenomenon to fashionable problems. Solving a problem the same way as everyone else (and often expecting a better result).
Both are widespread.
"Fashionable solutions" are a major problem. An actor lacking experience solves a challenge (one arising from a problem solved a million+ times before) not with a proven solution, but with a fashionable one (one that currently has some mindshare and good PR). The research, most of the time, is looking at alternatives that try to solve a similar thing in the same fashionable way. Therefore no real research was done.
An example: a fashionable solution for a static webpage is a fashionable static site builder; an unfashionable solution is an HTML page.
People are different, but I cannot imagine how someone could seriously work on (as in: devote their whole working time to, by their own choice, as an entrepreneur, researcher or hobbyist rather than an employee or student) problems they don't genuinely love. Sure, some do it for the money, but then they probably just love money and found a promising opportunity to earn it.
It's not just people's desires and people following what is "fashionable". I read a while back that in physics it was impossible to get funding for anything outside of what was fashionable; String Theory was their example.
Funny, we've just launched EssayMash.com yesterday trying to accomplish what PG writes about: further exploration into essays. We host monthly essay competitions on an important topic with $300+ in cash prizes.
What I understand from PG's words is that people, even the creative ones, for lack of courage or macro vision, are not an "Elon Musk" and go with the herd, building yet another crypto or yet another SaaS.
Nice to see the recent burst of writing activity from PG. Also nice to see people here putting it through the paces. Would expect nothing less.
yeah, but I am afraid pg is not relatable anymore (to most of us anyway).
devoting more than 20 percent of your time to problems that are unfashionable but dear to you is ill-advised because: a. they don't pay the bills; b. they take time, and in the grand scheme of things, spending time with others on things you all understand is better than being happy alone.
but of course there are exceptions...
b) not true for everyone. the way different people want to live their lives is, well, different.
His essays don't necessarily have to be relatable, they just have to be useful. He isn't a life coach (though I suppose that is arguable). His expertise is in tech and startups; that's where he's proven himself, and that's the area where his advice carries weight.
Paul is confusing ignorance for insight here. It's the same phenomenon whereby a weekend visit to Paris has your uncle explaining the European soul but your year in Kyoto leaves you barely able to generalize at all from your (actual) knowledge of nuance, complexity and diversity of another culture.
Everyone is trying to break the mold at various scopes.
Meanwhile, you're welcome to reproduce your late-90s success at any moment, Paul. We'll wait.
Does YC not count?
Sounds grand. There are a lot of us who just want to pay the rent. If that means a React job, so be it.
The heuristic of trying to work on what you genuinely love is not helpful or practical for most people. It sounds good, but it is just a platitude. Genuine love and fake love feel and look pretty similar. Most people naturally start loving the life they live in, if it is generally positive. Then, they make up a self-affirming, coherent narrative that justifies their emotions, decisions, and interests. If you do an AI startup, life goes well for you, and you embrace that decision and life, how can you differentiate whether it was really a genuine interest or not?
Cal Newport makes the good point that people learn to like/love things after they get very good at it, whatever “it” is.
I love Joseph Campbell but his advice to his students to “follow their bliss” may not always be optimal.
I read the AI book “Mind Inside Matter” in the late 1970s, and even though I have done a ton of non-AI architecture and software development, I have also been able to work on AI problems like knowledge representation, expert systems, NLP, neural networks, and deep learning starting in 1982. I definitely followed my bliss, and while I have never been world class in my profession, I have enjoyed myself.
> Most people naturally start loving the life they live in, if it is generally positive.
The platitude is directed toward those living a life they feel is negative, but who slog through for whatever reason: fear of change, obligation to other people, etc.
Taken as feel-good, head-in-the-sand optimism, of course it's facile advice.
But imo there is genuine wisdom in it, and it's this: You are much more likely to do something well, and to continue to do it until you reach a high-level of expertise, if you don't have to force yourself to do it. People who naturally love working out are more likely to be fit. That's what the heuristic is.
I think it goes much deeper than people choosing to work on fashionable problems. Here are my thoughts:
1. The more defined and mature the problem space is, the more the assumptions that underpin that field are taken as a given.
2. These assumptions may become so deeply ingrained that people become effectively blind to the entire range of possibilities.
3. These assumptions frame how the problem/solution space is looked at. Therefore the solution space is constrained by the set of assumptions (about what the problem is, how to solve it, what to do).
4. These assumptions are recursive, in that they are contained within other assumptions. It is turtles all the way down. At some level, someone working in the space may not even understand what the core assumptions are. We have to have these assumptions though. See the next point.
5. The interesting thing about this is: The constraining of the problem/solution space is actually a positive. It enables co-ordination and incremental improvements and refinement. It allows people new to the domain to quickly get productive.
I like to think about it this way:
Picture yourself in a massive area that is pitch black. You are grasping around and cannot see much. Someone figures out how to get a tiny fire started. With this tiny fire you get to see a little bit. Using this you can build a bigger fire illuminating more of the area (but still leaving most of the space unexplored). This eventually results in the ability to build a permanent light illuminating a specific corner of this space.
This specific space with light illuminating it becomes highly productive; people can do all sorts of things, like read, etc. Yet there are still areas left unexplored. The light cannot simply be taken across. It takes work, and it takes turning your back towards the current "lit" up space, and taking a step back into the dark. A scary thought for some.
6. Importantly: These assumptions have been inherited from the past. So they existed and were relevant at a specific point in time. They may or may not be relevant as of today. We would have to peel several layers to get to the core.
7. While 5 is a positive, it is also a negative. The idea/area's greatest strength (maturity, constant improvements, efficiencies) is also its greatest weakness (constraining the search space).
To take a step into the dark, is to turn your back to the lit up parts. You have to question the underlying assumptions and see if they are still relevant. If you discover an assumption about the world that is no longer accurate, then you found a new space to illuminate.
To put it in another way, to explore the dark is to shift your perspective on the problem/solution. It is to see with "new eyes". Initially it may be dark, but slowly with diligent work, and passion you could light up a completely new and novel area.
Area man writes blog post saying things are great, and it just so happens to overlap exactly with what he does.
> Even the smartest, most imaginative people are surprisingly conservative when deciding what to work on. People who would never dream of being fashionable in any other way get sucked into working on fashionable problems.
How can a statement like that be made? Is there some kind of authoritative directory of "the smartest, most imaginative people" being "surprisingly conservative when deciding what to work on"?
Implied, I guess, is that Paul means "who I've met or who I know of". So then say that.
It's a big world out there. Who knows what anyone is working on or what they are thinking or have tried and why they haven't pursued it.
This is a bit like saying "people love their dogs and will do anything for them if they are sick". It's just a general statement of opinion by one person (and generally accepted as being correct), but based on nothing even close to scientific, nor backed up by any actual data. That part is fine. But if that is the case, state it as such and not as some absolute. Why does this matter? Because when someone like Paul writes something, it will be taken by others to be some kind of important thought or fact.
Most of his essays have that wise-old-man-who-has-seen-the-world kind of vibe to them. If anybody else wrote the same things without the success he has had, it would be ridiculous to read.
I don't think anybody who reads his essays is looking for any scientific report based on facts. They are looking for some sort of confirmation that they are not crazy when they have similar thoughts.
It's in some ways the writing equivalent of the NPR calm and measured voice: the intended impact is to make it sound more believable, important, entirely rational, and correct.
Silicon Valley’s biggest problem is that it tends to see problems as opportunities and opportunities as problems. There is no opportunity in that problem.
As a long-time pg supporter, it pains me to say this: I think at this point pg could write anything and it would show up immediately with critical acclaim.
It was more charming when he had to work hard to make his points known.
But hey, fame, right? Just famous people things.
There's so much more to say in this case, though! How do you avoid the traps? Waving a wand like "Just love something" leaves far too much to the imagination. Pointing at a prior essay about loving your work is helpful, but different.
Often, you have to actively offend people in order to find good problems to work on. The idea that people have devoted their lives to the wrong thing is inherently offensive to them. That's a point not covered here.
For example, I imagine that a lot of people who've studied 3D rendering for their entire lives are about to feel very outdated the moment neural network renderers displace them. And that's also a good counterexample to the point that "Often, the best place to search for new ideas is a place thought fully explored." It might often be true, but it's not always true.
And then there are the in-betweens. Bitcoin was in a field both thought fully explored (crypto + finance) and also unexplored, in a certain sense.
Had the same thought.
Some of his essays, no one else could have written them and brought a fresh, nuanced perspective.
These couple of paragraphs wouldn’t get any attention if not for the name of the writer. Maybe that’s fine—great writers have their share of banality—but it does reveal how susceptible we are generally to brand name over substance.
I am actually surprised that PG doesn't feel uncomfortable about the 'acclaim' he is getting for writing about many things unrelated to anything he has expertise in. Reminds me of celebrities who opine about politics (sometimes, though not always, the case with PG).
Paul is authoritative on many topics. On general thoughts about life and people, it's great to hear what he thinks. Why not? But he is no more special than 100000 other people who have no audience. Does he know this?
I've often thought he should do some A/B posting with his thoughts. Write something and then randomly decide whether to put it on his blog or some other place and see what the interest level is.
They're essays. The word means 'attempt' and the genre has always been about non-expertise. Montaigne was the ultimate non-expert.
> The word means 'attempt'
It can, but doesn't in this context. 'Essay' was a sort-of polyseme (like 'passion', which can still mean 'suffering') that has long since severed ties with its origin. Outside certain narrow academic discussions, the etymology and current use of 'essay' have effectively nothing to do with one another.
In any case though pg does pretend to some amount of expertise. "How to Do Philosophy" is an example.
Of course it does; it has kept a close association with this meaning throughout literary history. An essay is an attempt, a sketch, thinking out loud. It's the literary genre equivalent of informal conversation. In an essay, you discover what you think by writing it, just as in exploratory programming you discover what your program is by programming it. To say that essays aren't for non-authoritative musing is like saying novels aren't for depicting human experience.
I strongly disagree with this.
The greatest essayists are not putting a "sketch" into the world. I cannot imagine reading an Isaiah Berlin essay and saying, "this is just informal conversation."
Consider Didion, Foster Wallace, Sontag, Mailer, Orwell, Hitchens, Paine, Zadie Smith, the founding fathers of the United States via the Federalist Papers.
There is no lack of seriousness, no lack of rigor, and no lack of direct purpose backed by thoughtful consideration and ample evidence.
There are _also_ informal or unserious or musing essays, but please do not lump together the entire genre of essays with a description of Medium posts.
I think we got some signals crossed. I wasn't talking about being unserious—just that you don't have to be an expert to write a fine essay.
I haven't read all the authors you list, but the ones I have support the point. They were not specialists writing about topics they were authorities on. They were good writers and thinkers exploring the topics they were writing about.
I didn't say essays aren't for non-authoritative musing. My points were:
1. 'Attempt' is the origin of 'essay', not its current meaning. This sort of mistake is so common there's a fallacy (genetic) named for it, but a more compelling read than some entry in a dictionary of fallacies would be The Genealogy of Morals, Essay 2, section 12.
2. pg's essays aren't non-authoritative musing. I don't think he himself claims they are, even in the essay on essays. Exploratory and a process of discovery? Absolutely. But these things aren't necessarily evident in the final product and in any case can coexist with the pretense to authority.
Side note:
> In an essay, you discover what you think by writing it
This is simply a description of writing, whence the dictum "writing is revising".
For what it’s worth, I explained the situation to a friend, and they read the essay and said it was their favorite. They said most of pg’s essays feel exhausting, and they don’t generally like reading them. So I guess I was mistaken, since the audience is the ultimate test of an essay.
It just felt painful to see so much left unsaid. But my friend said that it was immediately injected into the reader’s mind, and didn’t need to be said.
My thought is that Paul should view his essays at least as seriously as someone who is writing an important college application essay given who the audience is (college admissions in that case; HN readers in this case among others). Or maybe someone writing an opinion piece for a newspaper that is widely read and taken as significant.
Imagine as an extreme (and not what PG is doing) if someone writing an 'essay' were to express objectionable stereotypes about people (women, minorities, certain women). [1] The point is, if you have a following, I think you need to take what you write (essay or not) more seriously. After all, PG, for whatever reason, typically has several people review his essays before posting, which seems, in a small way, inconsistent with the idea that an essay (by what you are saying) is about non-expertise. If that is the case, why do you need others' expertise to vet it before posting?
[1] If it's someone's point of view why does it bother people so much when it's expressed if they are stating it as their impression and not fact?
If anyone is taking pg's essays as what they ought to think rather than as what he happens to think, that's their problem, not pg's. I don't think anybody is taking them that way, though; people are just conjuring this up for some strange reason. Some have even suggested that HN has this response!
Actually, I think I know the reason. pg writes to maximize brevity and directness. He's always interested in the shortest logical path from A to B. When language is optimized that way, it has a force that can sound like an implicit claim to authority. So his essays land that way with some readers, and then they react with a sort of protest: who is this guy and why does he get to act like an authority? But he isn't—what he's doing is stripping out everything extraneous, and that includes the soft touches that normally soothe the reader—"this is what I think", "your mileage may vary", that kind of thing. He's not stripping them out because he wants to provoke or thinks he's an expert or anything like that. Rather, it's a matter of taste. He likes to make things minimal. The same impulse is behind Bel, or the design of HN's front page. This is someone who might spend weeks getting rid of a single line of code if he was convinced the program could be shorter.
Before I met pg I thought it was impossible for someone to talk the way that he writes. But he does. He's always going for the shortest way to put things. What's clearer in spoken language than in writing is why he does this. It's for pleasure. In person it's infectious, because he clearly does it for its own sake and for fun. It's also essentially the thought process of an essayist: someone on the hunt for just the turn of phrase that makes a point in just the right way, and then moves on.
> If that is the case why do you need others expertise to vet it before posting?
They might think of something you missed or find parts that are unclear. But I don't think he's looking for experts so much as good readers.
Actually a very good reply to my reply. But I think there is another reason PG is this way. He has a math and programmer's brain. As such, he would be less likely to wrap his writing in superfluous wording and more direct in how he makes his point. I am sure many of us have run into people like this (I know that I have).
> He's always interested in the shortest logical path from A to B. When language is optimized that way, it has a force that can sound like an implicit claim to authority.
Agreed, which is why someone should take that into account when they write (they don't have to, of course; they, and PG, can do whatever makes them happy).
I make money writing. And I do really mean 'make money writing'. I am not an author or a professional writer (in the sense people usually mean), but I earn a really good living, and I do so by writing. For the purpose of my comment here, it's not important what it is that I do to earn that money. But it's considerable, let's say. One thing that I do is always take into account what tone or mood I am trying to achieve to get what I want when I write. I obviously don't put the same effort into HN comments as I would when writing to make money, or if I were writing something of any importance that many people would read (and that I might be broadly judged on). So I guess this is my judgement on what PG has written and the way he writes. Fwiw, I was not an English major, didn't study philosophy or the classics, didn't do particularly well in English class, and didn't even know what you first said about essays. My point is that, to me (in order to earn a living), the audience is super important.
Anyone can be direct. Fewer can be direct while having a lot of money. I'd bet the latter is on more people's minds.
That's probably a huge and mostly hidden factor. Another is the ambivalent reactions people have to influential personalities, which pg has been in this community.
> The idea that people have devoted their lives to the wrong thing is inherently offensive to them. That's a point not covered here.
This itself is in need of a lot of conversation. I think a really ugly part of SV startup culture is, in my opinion, "it's okay to offend and hurt people, and it's sometimes demanded of you to..."
And I think we can be so much better than that. Being "disruptive" this way is just a lazy excuse.
I interpreted that more as, the mere fact that a technology exists will offend some people. Automation that puts people out of a job, for example, offends a lot of people, especially those who will be most directly affected. It doesn’t mean those people don’t deserve compassion and assistance, but their taking offense shouldn’t prevent someone from working on automation (in my opinion). I think that goes for a lot of technologies.
And some not technologies. I’m a lawyer and recently there has been movement toward letting people who aren’t lawyers do things that are traditionally considered the practice of law, like creating and filing certain legal documents. Most of the documents are ones that are already done almost exclusively by paralegals and secretaries, but I regularly see articles arguing for restricting this couched in the language of protecting people from malpractice. It seems to me to be a case of lawyers who worked hard for their status being offended by change that would lower the cost of legal services and their status. I don’t think their offense should be taken into account, rather I think we should make changes to benefit people who currently can’t afford legal services.
I also don’t personally think there is any shame in spending your life on something that becomes an anachronism, so I don’t think much of the offense taking at new technology is warranted. Once upon a time, 90-something% of people were farmers, after all...
I also agree; these are short blips (long-form tweets) with no research, no references, no real insight beyond the obvious. One positive thing is that they invoke some level of discussion, and often auxiliary ideas come out of it. Perhaps.
At the other end, it is laborious to go through well-researched long-form articles.
I really like Drew DeVault's blog [1] - I think he strikes a great balance between tweet-like articles (pg, daringfireball) and something extremely verbose (Stratechery).
But hey, fame, right? Just famous people things.
It's not just fame; HN is PG's site. It would be almost rude if his posts didn't show up high here.
Yep - read my comment (I had not read yours when I wrote mine).