In design, we make predictions: about how people will use a thing, how it will benefit the business, and so on. Every prediction falls on a spectrum from good to bad. Good predictions are based on evidence (interviews, subject matter expert input, user validation sessions, etc.). Bad predictions are based on assumptions (whim, opinion, etc.). Either way, what we produce in a design phase, however long or short, large or small, is always based on a prediction.
It’s when we deploy a product or feature that we can finally move from predictions to actuals. We can study how people actually use a thing, how it actually affects the business, whether it is actually valuable. From these insights, we can learn new information, form new hypotheses, plan new tests, draw new conclusions, and deploy new iterations.
In the ideal product development scenario, we treat design as a science: we pull from various sources to build intelligence that informs a hypothesis, then we test it, and then we study the results to form new hypotheses. Experiments, then, and not sequential phased processes, become the pathway to innovation. In this ideal scenario, we optimize our teams for continual improvement. Rather than build a thing and wash our hands of it, we build a thing and learn from it.
In that ideal scenario, we accept that design is a prediction, and we treat our predictions as exactly that: predictions, not completed efforts. We accept that we can only know the truth after deployment, and only by studying outcomes post-deployment. Well-formed predictions are valuable, but predictions only become lessons after reality has its day.
About 15 years ago, I was involved in a usability study of about 300 users on various design patterns specific to the Web 2.0 concept (interaction over static information). We tested tag clouds, infinite scrolling, configurable start pages (My Yahoo, among others), and several other things. Across the board, these paradigms, which had become massive trends, were almost universally either despised or misunderstood.
Tag clouds, for example, commonly read to people as expressions of what designers wanted them to click, when in reality they reflected what other users had clicked in the past. Infinite scrolling commonly led users to mash the arrow keys on their keyboards in a futile effort to reach the bottom of the page. People hated this stuff, and yet you could find these elements on site after site after site.
This was one of the moments when I learned the value of studying behavior. All of these ideas were interesting in concept and terrible in practice. Everything is hypothetical until you build it.