AI Will Eat UI
artyomavanesov.com

Hard disagree, at least for the next 5 years. We're still struggling to maintain "previous version" information density and "previous version" performance across the various websites and apps that are undergoing redesigns.
An AI will not predict when I want to draw a construction line in Fusion360 and automatically select or display a clickable widget in a suitable and predictable manner. The correct choice is to give lots of options, visible at a glance, with reconfigurable hotkeys.
I lament the day that unpredictable icons line a toolbar, changing as often as I change a tool. And this whole discussion happens independently of whatever styling a company wishes to exert, with smooth transitions, animated modals, etc.
Maybe I’ll eat my hat within 5 years, but it’ll be because it has been forced upon us instead of any actual user experience feedback.
Have we simply given up on QA?
Determinism in a system has long been a cornerstone of validation, even outside of software circles: the term 'test fixture' originally referred to a contraption you fitted hardware into to serve as a mock 'environment' for testing, one that could fake inputs and/or register outputs.
We already see a hint of this outcome in apps we use today. Our Facebook feed, which consists of content-rich widgets, is the product of an algorithm's best attempt at predicting what content we want to consume and how we want to consume it. However, this article is about context, not content.
> I lament the day that unpredictable icons line a toolbar, changing as often as I change a tool.
If you visit booking.com it's unlikely you'll ever see the same website twice. That's because their product team is continuously A/B testing, trying to come up with the best one-size-fits-all solution.
Although their designers' decisions are based on usage data, the data will only tell them that button A outperforms button B, not why. That's because our devices are unable to accurately measure what we're doing or what state we're in.
With the right data, algorithms could make layout and even component-level design decisions, but tailor them to individual users instead of user personas.
The article is a little all over the place. It's not so much about interfaces integrating AI to figure out what your next move is, but rather about using AI to generate interfaces for the next-gen UI that mixed reality could usher in.
I'd say more like 50 years, if even that close.
This is a pretty awful take.
- AI surely has a place in design tooling, but designing UIs from the ground-up without human intervention would require a deep understanding of both human psychology and the domain at hand, which I really think would require artificial general intelligence.
- We're still leagues away from AGI.
- Is the author suggesting that UIs will restructure themselves in real time under subconscious feedback from the user? There is nothing that makes a UI more anxiety-inducing than unpredictability and inconsistency. The idea of a UI that's constantly in flux by design is, in a word, hilarious.
- That Autodesk feature doesn't even have anything to do with AI, from what I can tell. It's just a fancy constraint solver.
Your third point doesn't seem that hilarious to me. What is wrong with a UI that is constantly trying to predict what is best for you? It doesn't necessarily mean that the flux will be chaotic or unwanted. I can imagine a UI which changes slightly based on how difficult it is for me to find what I am looking for. Not one which changes so often that it is its own problem, but one which changes often enough to reduce some of the pain points in my workflows.
Automatically changing interfaces are terrible.
Static menus are faster than magically changing menus. People want to know where to click to do a thing.
User customizable menus can be faster (when used well) than static menus.
Here's a study: http://user.ceng.metu.edu.tr/~tcan/se705/Schedule/assignment...
IMO, you'd get more mileage out of a menu with "Frequently used commands" that you can lock items into than with an AI rearranging stuff. And the algorithm is much simpler.
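A minimal sketch of that simpler algorithm, assuming nothing beyond counting invocations and letting the user lock items in place (all names here are hypothetical, not from any real toolkit):

```python
from collections import Counter


class FrequentCommandsMenu:
    """A "Frequently used commands" menu: pinned items stay put,
    the remaining slots are filled by invocation count."""

    def __init__(self, size=5):
        self.size = size
        self.counts = Counter()
        self.pinned = []  # user-locked commands, in the order they were pinned

    def record_use(self, command):
        self.counts[command] += 1

    def pin(self, command):
        if command not in self.pinned:
            self.pinned.append(command)

    def items(self):
        # Pinned commands first (stable positions), then the most
        # frequently used unpinned commands, truncated to the menu size.
        rest = [c for c, _ in self.counts.most_common() if c not in self.pinned]
        return (self.pinned + rest)[:self.size]
```

Because pinned items never move, the user keeps a stable mental map, while the unpinned tail still adapts to usage.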
The hard part in UI is discoverability, not repeatability. We would be wise to develop an easy-to-use workflow for finding commands based on what the user wants to do. Natural language processing could help here. Find a command: the user types "How do I draw a box around some text". The search results should show where to find each command in nested menus, and be able to play an animated workflow example. An AI solution could help a user sift through a result set and find the choice that best matches their needs (for the box example, one way to add a box would be to turn on a border for a given paragraph; another would be to add a free-form vector rectangle positioned on the page).
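Even before reaching for an AI model, a crude version of that search is just keyword overlap between the query and each command's description, returning the menu path where the command lives. A sketch under those assumptions (the command catalog and menu paths are invented for illustration):

```python
def find_commands(query, commands):
    """Rank commands by word overlap between the query and each
    command's description.

    `commands` maps a menu path like "Format > Borders" to a
    plain-text description of what that command does.
    Returns (menu_path, description) pairs, best match first.
    """
    query_words = set(query.lower().split())
    scored = []
    for path, description in commands.items():
        overlap = len(query_words & set(description.lower().split()))
        if overlap:
            scored.append((overlap, path, description))
    scored.sort(reverse=True)
    return [(path, desc) for _, path, desc in scored]


catalog = {
    "Format > Paragraph > Borders": "draw a border box around a paragraph of text",
    "Insert > Shape > Rectangle": "add a free-form vector rectangle shape",
}
```

For the query "How do I draw a box around some text", the paragraph-border command outranks the rectangle because more of the query's words appear in its description; a real implementation would add stemming, stopword removal, or an actual language model on top.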
The problem is when it changes in ways that are unpredictable. All UIs change in certain ways as we interact with them, but users are constantly trying to create their own internal model of the interface's behavior so they know how to do different tasks repeatedly. We learn rules about how dropdown menus work, for example, or which nav links go to which screens (and how to get to other screens from there).
It's problematic when that underlying model is too obscured (not communicated well enough) through visual cues. It would be actively disorienting to constantly shift it underneath someone who's trying to wrap their head around it.
"By measuring our heart rate, respiration, pupil size, and eye movement, our AI’s will be able to map our psychology in high resolution. And armed with this information, our interfaces will morph and adapt to our mood as we go about our day."
Totally disagree.
Consistency is one of the primary foundations of UI, and most adaptations made today are really quite problematic.
They haven't really given any examples supporting either this specific prediction or the predictions at large.
Second is the issue of integration - our homes could all be fully automated today if we could agree on a bunch of protocols, but we don't.
Shoot - even our mail is stolen from our porches because we can't get our act together on 'lock box' situations for the neighbourhood, etc.
There's no practical reason to have my pulse, sweat, stress levels perpetually monitored, let alone have that integrated into some arbitrary app so they can adjust the ads I see.
So no.
This is 'peak AI hype' kind of stuff.
The overwhelming sentiment in comments seems to be dismissive.
However, I can see how one day "AI" could easily generate a very functional user interface for enterprise software automatically - think Oracle, SAP, etc. Point it to a DB, prescribe a UI style guide, and it could totally generate a functional and perhaps even beautiful web or mobile app.
And if we think about it, it could also perhaps generate 99% of all the websites that exist today (news, landing pages, blogs...).
AI for helping you design and test UX, perhaps. I could see ML being used to 'simulate' eye tracking (and I'll bet some startup is working on that already).
But morphing UI based on subconscious behaviour, I'm not buying it. Short of actual mind reading, there isn't enough information in those signals to make a change in UI consistent. Is my heart rate increasing a sign that I want the lasso tool in Photoshop? I don't think so.
Just Plain Wrong:
Reason 1: Generative Design is a horrible basis for building UIs.
GD requires a well-understood model of how the design needs to operate and be built (i.e., physics + specifications + machine-tool kinematics). We don't have a physics of UI design. GD makes weird, ugly stuff.
Reason 2: Do you not remember how awful Clippy was?
See: https://en.wikipedia.org/wiki/Office_Assistant
TLDR Extract: The program was widely reviled among users as intrusive and annoying, and was criticized even within Microsoft... Smithsonian Magazine called Clippit "one of the worst software design blunders in the annals of computing".
Reason 3: Also brought to you by Microsoft: the UI Hell of Adaptive Menus in Office 2000.
See: https://blogs.msdn.microsoft.com/jensenh/2005/10/10/combatin...
TLDR Extract: Adaptive Menus were not successful. In my opinion, they actually add complexity to the interface...Auto-customization, unless it does a perfect job, is usually worse than no customization at all.
It's not about rearranging menu-items, but rather learning how an individual user could optimally interact with an application and then tailoring the interface accordingly.
If you visit booking.com it's unlikely you'll ever see the same website twice. That's because their product team does continuous design and A/B testing, trying to come up with the best one-size-fits-all solution. If this loop of designing and testing were automated, UIs could be designed based on individual user data rather than aggregate data.
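The simplest automated version of that loop for a single user is a bandit over UI variants: show a variant, record whether the user succeeded at their task, and gradually favor what works for them. A sketch using epsilon-greedy selection, with invented variant names and a made-up success signal:

```python
import random


class UIVariantBandit:
    """Epsilon-greedy choice among UI variants for one user."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon  # fraction of the time we explore at random
        self.stats = {v: {"shows": 0, "successes": 0} for v in variants}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))  # explore
        # Exploit: the variant with the best observed success rate so far.
        return max(self.stats, key=self._rate)

    def record(self, variant, success):
        s = self.stats[variant]
        s["shows"] += 1
        s["successes"] += int(success)

    def _rate(self, v):
        s = self.stats[v]
        return s["successes"] / s["shows"] if s["shows"] else 0.0
```

This is per-user rather than aggregate, which is the point of the comment - but it also illustrates the objection elsewhere in the thread: "success" has to be some measurable proxy, and the interface keeps changing while it learns.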
The premise is that just like our content feeds (e.g. Facebook, YouTube autoplay) are unique, so will our context (i.e. interface) become unique.
Define "optimally". Seriously, there is always going to be some easily measurable value (or values) that operates as a proxy for "optimal". Even if the model perfectly maximizes the goodness of the proxy values for every customer, the model will still be wrong inasmuch as the proxies are divorced from the actual "optimal". Also, if optimality is focused entirely on maximizing revenue under the current model, the result will be an automated user-abuse mechanism that entraps the user and forces them to put up with a horrible experience just to eke out a 3% boost in ad impressions and the 0.003% boost in clicks that generates.
And I hate, hate, hate that aspect of Facebook. God forbid you should ever want to find a post a second time. God forbid I should see posts from my friends in a timely manner. Nah, show me posts from banks and cell phone companies that I will not do business with, ever, and show them 50 times.
YouTube Autoplay is a pain in my ass that I keep having to disable.
It's too bad we weren't responsible enough to manage Usenet and email in a Spam-free way. Now we've driven ourselves into a walled-garden, crappy version of Usenet with proprietary clients that serve only the site operators. At least we know where the spam comes from...
When it comes to Facebook someone is already making the decision about the optimal balance between content and ads. However, currently this decision is at least to some extent made on aggregate.
As a result, some users get so turned off by ads that they leave the platform, whereas others continue using it because they don't regard ads as such a high cost.
We'll see about that.