Ask HN: How do you manage delivering dashboards between data teams and others?
Full context: A friend and I are working on a pain point that bugs both of us: the painful disconnect between understaffed data teams who clean data and build dashboards/reports, and the non-technical teams who end up consuming those reports.
To probe further, we ran a lean-startup-style "customer discovery" interview process, talking to 35+ people in the SMB space. The interviews validated some of our initial hypotheses but did not give a strong signal on the top pain point.
So, what are the challenges you face? I would love to hear how you are dealing with them. If you prefer email, you can find mine on my profile.
Thanks in advance!

-----

My experience is that ascertaining exactly what the report consumer wants to see on a dashboard is, by far, the most difficult part of the process. Often the consumer wants a visual way to confirm a vague feeling. By the time the data team can work out what the consumer actually needs to see, collect and wrangle the data, and build the dashboard, the consumer may have lost interest or become frustrated and moved on. Identifying a feeling and graphically representing it with data is extremely difficult, and it led me to a place of great despair at a previous job. Anyway, I don't have a good answer for this, but I hope you can crack it and become wildly successful doing so!

-----

Sorry, a bug took me down so I could not reply earlier. Thank you for your reply; a few follow-up questions and observations:
1. I do agree with you on the challenge of "feeling" vs. "what I actually want". Is this about spending more time with the report consumer to understand what they want to see? Or is there something more?
2. How did you solve this problem? I would love to learn how you addressed this, despite all the struggles.
3. If you were to revisit this same problem today, what would you do differently?

-----

In my experience, small teams especially like to fall into the "any data is good data" trap, where they build a bunch of big, beautiful dashboards full of irrelevant, misleading data. A solar company produces a bunch of time-series lines with electrical jargon and pretty icons that maybe 2% of customers actually understand. The devs making the dashboards don't always know what the data actually means, how it's measured, what the precision is, what the nominal range should be, etc. A marketing team ends up with something that looks like Google Analytics had a baby with NASA Mission Control: big scary numbers, awe-inspiring trendlines, and fancy charts your math teacher couldn't even name... but nothing actionable, because nobody can put it all in context.

People can't really focus on more than 2-3 things at a time, and ideally even fewer. If you just give them a big dashboard full of pages of all that pretty data you've massaged, IMHO you've met your needs (how do I present my collection) more than theirs (what do I need to do differently to improve performance). More than the raw numbers, they have to be able to derive useful insight from the data, and that requires both deep domain knowledge and some rudimentary understanding of statistics, but many dashboards aren't really set up to highlight actionable things. They might show you a % change from last week, but are they showing the outliers (most weeks we see +/- x% change, but suddenly there's this one data series that shows a 400% gain... is that highlighted? alerted?)? Do they also show changes in the rates of change? Has the variance/IQR/etc. of a relatively stable dataset suddenly changed? Has a trendline elsewhere reversed? Does the dashboard easily show the big-picture highlights (the "this is interesting..."
tidbits) to facilitate big-picture discussions at the weekly meeting, but still allow deeper drilling-down during someone's individual time? How does the data presented help with a SWOT analysis, or even just the question of what people should look into that week? If it's the same bunch of numbers and charts over and over again, eyes will just start glazing over.

It's the anomalies that really matter, but teams don't always know which ones to watch out for or how to use the proper statistical tests to measure them. Software can help with that by abstracting relatively complex stats into UI elements (colors, highlights, warnings, notifications), but you still need someone with the domain knowledge and statistical know-how to set those up in the first place. A dashboard is just a prettier spreadsheet unless it's smart enough to surface business concerns. A basic stats background (even just an intro online course) can go a lot further than a dashboard that only uses basic arithmetic, because stats can dramatically improve the signal-to-noise ratio of a dataset.

Sadly, many teams and devs in my experience don't think that way ("what useful signals can I derive from this noise, and what statistical tools do I need in order to do so"), and instead just think "how do I make sure all the data is here, and how do I fit it all on screen". They focus on quantity and completeness, which is often the opposite of what they actually want (a good signal-to-noise ratio that highlights actionable things, not just a mountain of overwhelming data). A good dashboard helps them focus on what actually matters; a poor one just adds to their stress...

-----

Sorry, a bug took me down so I could not reply earlier. Your comment hits the spot.
I agree with you about not just showing raw numbers or some kind of trend line, but surfacing more analytical insight into what is really happening with the data and the systems feeding that data. However, I also suspect that most humans are terrible at understanding stats, so any information based on statistical analysis must be dumbed down to an ELI5 level that most report consumers can understand. How did you approach this problem?

-----

Shoot, hope ya feel better!

> However, I also suspect that most humans are terrible at understanding stats, so any information based on statistical analysis must be dumbed down to an ELI5 level that most report consumers can understand.

For most regular people, I think stats is kinda like code: as soon as they see it, their eyes glaze over and they stop paying attention. I don't think they should even see the underlying math, just the abstracted conclusions. For real-time data, that might be a simple traffic light (green/yellow/red with colorblind-friendly symbology) that shows when something is within expected ranges, turns yellow when something is 1 standard deviation away, red when it's 2+, and blinks and screams when it's 3+. The underlying stats have to be carefully analyzed based on the actual use case with proper domain knowledge, but the UI can simplify that down to "everything is ok" vs "keep an eye on this, it's starting to look weird" vs "wake everyone up in the middle of the night to deal with this".

For time series, maybe it's a colored background band showing the expected highs/lows for those few months, under the line that plots the actual data. If the line is within the band it's fine; if it's far outside the expected range for that time period, something might be off.
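The traffic-light idea described above can be sketched in a few lines. This is a minimal illustration, not anyone's production code: the 1/2/3-sigma thresholds and the idea of a fixed "healthy baseline" window are assumptions for the sake of the example, and real cutoffs would need the domain analysis mentioned above.

```python
# Map a live metric reading to a traffic-light status based on how many
# standard deviations it sits from a historical baseline.
from statistics import mean, stdev

def traffic_light(history, current):
    """Return 'green', 'yellow', 'red', or 'alarm' for a new reading.

    history: recent readings taken while the system was healthy.
    current: the latest reading to classify.
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:              # flat baseline: any change is suspicious
        return "green" if current == mu else "alarm"
    z = abs(current - mu) / sigma
    if z < 1:
        return "green"          # within expected range
    if z < 2:
        return "yellow"         # keep an eye on this
    if z < 3:
        return "red"            # investigate now
    return "alarm"              # wake everyone up

# Example: a latency-like metric hovering around 100
baseline = [98, 101, 99, 102, 100, 97, 103, 100]
print(traffic_light(baseline, 101))   # prints: green
print(traffic_light(baseline, 160))   # prints: alarm
```

The point is exactly the abstraction argued for above: the viewer only ever sees the color, never the z-score behind it.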
For deltas, sparklines or font bolding/coloring/sizing can give a visual indication of the magnitude of change, either relative to the previous time period (this metric is 2000% higher than yesterday) and/or relative to other metrics in the dataset (everything else changed +/- 10%, but this one was +50%).

The downside to this approach is that it requires actual domain knowledge, an understanding of each metric and its applicability to the business, customization for each report viewer based on what their job allows (i.e. what actions they can actually take in response to which metrics), and a lot of filtering, analysis, testing, and further iteration. It's a far cry from a 1-click auto-generated dashboard based on some standard dataset (like web analytics).

In one of my jobs, we used https://lookerstudio.google.com/ quite a lot, because it also allowed the report viewers to edit the dashboards on their own (for simpler things, or to change pagination and layout, etc.), but we'd have pipelines in the middle that would ingest raw data and produce statistics for the dashboard. We approached it like any UX problem: not metrics-first, but user-first. We talked to them about why they wanted a dashboard, how they consume the data, how they triage the metrics, how they respond to the metrics, how they like to be notified (or not), etc. It's a very personalized approach that tries to mimic what a good assistant would do ("here's today's must-know summary for you") vs the sysadmin approach ("here's how every CPU core and process is doing").

Fundamentally, it was about finding the very few signals among all the noise, showing as little as possible upfront, but allowing drill-downs where needed. 90% of the time they wouldn't drill down, which to me was a good sign that we were able to customize the dashboard to their everyday needs.
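To make the "everything else moved +/-10%, but this one moved +50%" idea concrete, here is a hedged sketch of flagging a metric whose period-over-period change is an outlier relative to its peers. The median/MAD ("modified z-score") approach and the 3.5 cutoff are illustrative choices, not what any particular team used; they're picked over mean/std so that one huge mover can't inflate the baseline and mask itself.

```python
# Flag metrics whose percent change is far outside what the rest of the
# dataset did, using a robust (median/MAD-based) modified z-score.
from statistics import median

def outlier_changes(pct_changes, cutoff=3.5):
    """Return the metric names whose change is a robust outlier vs peers.

    pct_changes: dict mapping metric name -> percent change this period.
    """
    values = list(pct_changes.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:                # all peers moved identically
        return [k for k, v in pct_changes.items() if v != med]
    flagged = []
    for name, v in pct_changes.items():
        score = 0.6745 * (v - med) / mad   # modified z-score
        if abs(score) > cutoff:
            flagged.append(name)
    return flagged

# Hypothetical weekly changes: four metrics moved single digits,
# one exploded.
changes = {"signups": 4.0, "visits": -6.0, "revenue": 8.0,
           "churn": -3.0, "referrals": 420.0}
print(outlier_changes(changes))   # prints: ['referrals']
```

In a dashboard, the flagged names would drive the bolding/coloring/sizing rather than ever showing the score itself.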
-----

I should note that this is purely anecdotal, based on my experience as a frontend dev who also had to make several dashboards, not as a data specialist/data scientist.

-----

Am better today, thank you :-). Your suggestions align with how we are thinking about addressing this problem. I like the assistant-vs-admin framing; it strongly resonates with me and points us in a direction. BTW, multiple anecdotes are helpful, so thank you once again.