What is multi-objective optimization? (medium.com)
This is a very low-effort blog post on this subject. I'm not sure it adds anything new to the topic; it feels like the same sort of low-quality blog spam that shows up at the top of Google search results instead of a high-quality introduction to the subject.
It's mentioned in the article, but what I find neat about multi-objective optimization is that (for a certain type of well-behaved problem) the "solution" is not a single point (0-dimensional) like in normal optimization, but is (N-1)-dimensional, where N is the number of objective functions. So if you have 2 objective functions the best solutions all lie on some 1D curve, and if you have 3 they fall on some 2D surface, and so on. This is called the Pareto front, and Wikipedia has some nice visualizations[1]. It is then left as an additional exercise to pick out the best solution to your problem from the Pareto front.
A common example from engineering is optimizing for strength and weight. You may want an airplane wing to be very strong and very light; the Pareto front represents the best solutions at each strength/weight trade-off, and you can then use other information to pick a particular solution.
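Since the Pareto front keeps coming up, here is a minimal sketch of the idea in Python (the candidate designs and numbers are made up for illustration, not taken from the article): keep only the designs that no other design beats on both strength and weight at once.

    # Minimal sketch: find the Pareto front of candidate designs where we want
    # to maximize strength and minimize weight. The values are made up.
    candidates = [
        {"name": "A", "strength": 10.0, "weight": 5.0},
        {"name": "B", "strength": 12.0, "weight": 7.0},
        {"name": "C", "strength": 9.0,  "weight": 4.0},
        {"name": "D", "strength": 11.0, "weight": 9.0},  # dominated by B
        {"name": "E", "strength": 8.0,  "weight": 6.0},  # dominated by A and C
    ]

    def dominates(a, b):
        """a dominates b if it is at least as good on both objectives
        and strictly better on at least one."""
        return (a["strength"] >= b["strength"] and a["weight"] <= b["weight"]
                and (a["strength"] > b["strength"] or a["weight"] < b["weight"]))

    # A design is on the Pareto front if no other design dominates it.
    pareto_front = [p for p in candidates
                    if not any(dominates(q, p) for q in candidates if q is not p)]

    for p in sorted(pareto_front, key=lambda p: p["weight"]):
        print(p["name"], p["strength"], p["weight"])
    # With these numbers the front is C, A, B: each step up in weight buys
    # more strength. D and E are dominated and can be discarded before anyone
    # argues about trade-offs.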
There are plenty of scientific papers and Wikipedia articles for any complex topic. The point of the article is instead to introduce the topic in plain English, without extensive mathematical notation or expressions. The idea _is_ simple. Perhaps I should have added some visualizations of the Pareto front, but I think those graphs are sometimes shown too quickly. Besides that, what would you add to an introduction that is essential to one's understanding?
I think a worked example of a simple problem with some accompanying visualizations would make this a more complete introduction. Links to learn more would be nice too. As it stands it feels half done.
Alright, I see your point. My idea was to create a second article or part 2 with some more practical work using MOGAs (multi-objective genetic algorithms) if there were any interest. But I can see the benefit of adding a simple example here too.
Many organizations are doing multi-objective optimization without knowing it. Maybe all non-trivial organizations are doing this kind of optimization, but most of them don't know it. For example, for-profit companies care about both revenue and profit, and those aren't always aligned: their entire business is multi-objective optimization, with the results reported to investors every quarter (if they're public). The danger of not knowing you're doing multi-objective optimization is that it's too easy to treat each objective as independent, optimize each dimension while ignoring the others, and assume things are getting more optimal all along.
I've repeatedly seen this result in oscillations where X is optimized over the course of a year. When those efforts run dry, the effort switches to optimizing Y for the next year, without realizing that most of the gains in X were sacrificed for Y (repeat). It is a lot simpler to think about only one dimension at a time, but progress can be so much slower. If they've done a really good job, they are dancing in circles near the Pareto front and, to the extent their environment is static, their efforts are going to be neutral at best.
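To make that oscillation concrete, here is a made-up toy in Python (nothing to do with the commenter's actual company; the revenue/margin curves and the 0.6 weight are invented): a single knob trades revenue against margin, yearly single-objective optimization bounces between the extremes, while acknowledging both objectives at once lets you pick an explicit point on the trade-off.

    import math

    def revenue(x):  # grows (with diminishing returns) as we push the knob up
        return math.sqrt(x)

    def margin(x):   # shrinks as we do so
        return math.sqrt(1.0 - x)

    def argmax(objective, steps=101):
        xs = [i / (steps - 1) for i in range(steps)]
        return max(xs, key=objective)

    # "Optimize one thing per year" while ignoring the other objective.
    for year in range(1, 5):
        target = revenue if year % 2 else margin
        x = argmax(target)
        print(f"year {year}: x={x:.2f} revenue={revenue(x):.2f} margin={margin(x):.2f}")
    # The knob oscillates between x=1.00 and x=0.00; every gain in one
    # objective is paid for by the other, and net progress is zero.

    # Multi-objective view: every x here is Pareto-optimal, so the real work
    # is making the trade-off explicit (e.g. a weighted sum) instead of
    # rediscovering it each year.
    w = 0.6  # how much we value revenue relative to margin: a choice, not math
    x_star = argmax(lambda x: w * revenue(x) + (1 - w) * margin(x))
    print(f"explicit trade-off: x={x_star:.2f}")

Scalarizing with a fixed weight is just one way to pick a point on the front; the point is that the weight becomes a visible decision instead of an implicit side effect of whichever objective got attention last.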
I've also seen this turned around once it was understood that the problem was multi-objective optimization. That involved hiring people with a background in mathematical optimization (operations research, game theory, control theory, statistics, etc.) to build a system that would constantly adjust system parameters to stay near optimality. They built ML models, control systems, and auction systems that all worked together. The result was incredibly different. What had been years of experimentation, often with later experiments undoing earlier ones, became a system that adapted in near real time to changing conditions. The pandemic likely would have hammered this company, because it put many of their customers out of business. Instead, the system changed its own behavior to get roughly the best results it could and kept adapting as the pandemic went through all its stages. Their results are now ahead of where they were at the start of the pandemic, even though something like half of their customers are out of business.
A downside of automating this is that it is very difficult to experiment on the optimization system itself. Measuring improvements requires advanced experiment design and analysis, typically necessitating people with a PhD in specific areas of statistics (stats as used in vaccine trials, market economics, etc.). It is also difficult to understand what the system is doing and why. And without real constraints placed on it by those running it, a lot of damage can be done to variables not represented in the system, as it cannibalizes them to optimize the variables it does know about. I suspect this is the source of a lot of the high-profile damage Facebook has done to the world.
Interesting take. There are definitely multi-objective optimizations at play, and as you say, knowing this should be advantageous. Compared to running it in code, though, there is more uncertainty, partial information, and fewer options for tests/objective evaluation. In fact, we as individuals are doing multi-objective optimization every day when we spend our time pursuing different objectives (staying healthy, earning money, having fun, etc.).