The biggest drawback of screen-based products is that they’re screen based—they’re stuck behind the glass of your screen, with every limitation that entails.
Everything has to live inside the bounds of the screen, a screen that may vary widely in size, aspect ratio, and resolution from device to device. Can you imagine if every physical product in your home had to be the same physical size and external shape?
You can’t touch the design itself; every app, from calculator to social network to food delivery service, can use only images and audio to communicate with you, leaving most of your senses unused.
Screen-based products make up for this by outdoing physical products in interactivity and adaptability. In part this is thanks to the unique opportunities afforded by a canvas that we can change instantly to reflect whatever’s most useful at the moment. As Steve Jobs famously said at the launch of the iPhone, the problem with physical products is their rigidity: they can’t change to suit the situation or get updated as design improves.
But there’s still something so nice about holding a tool meant for one specific purpose. There’s no easier interface, no better fit for human factors, than a simple tool meant for a single thing. Such tools are “sharper” (in the product-market-fit sense), if less flexible, and in most cases, that’s exactly what you need. Swiss Army knives are useful, but 99% of the time, when you need a knife, you’d prefer one that’s just a simple, traditional knife.
Screen devices, like smartphones, are Swiss Army knives. Given that they’re often used on the move for a myriad of functions, it makes sense that they’re more Swiss Army knife than kitchen knife. But it’s worth noting that that positioning seems increasingly less relevant. Smartphones are no longer “mobile phones” so much as our go-to devices for every situation, even sitting at home. Is a Swiss Army knife best for such situations?
Apps are constrained to the screen and compete for attention with other apps on a single device that, for all its flexibility, still may not deliver the same kind of experience as a dedicated device might. But each of these also presents upsides: delivering products on screen means they can be more fluid, omnipresent, adaptable, localized, convenient, and powerful.
But there’s another aspect of physical products that screen-based products lack: tactility. That ironically intangible “tangible” quality that products have when you can physically touch, or more accurately, feel them.
One of the nicest things about industrial/physical product design is that amongst the tools at your disposal to shape user experiences is physical material. The same product transforms completely if it’s in a different material. Fabric, plastics, glass, ceramic, metal, wood, leather, and more can be combined to affect not just visuals but also the product’s texture, durability, and even smell and sound.
One of the greatest things material can do is add delight simply by subverting expectations. Something that might look hard might actually be soft, or something that you might imagine to be made of plastic might be fabric instead.
A great example of this lately has been a rebirth of wrapping speakers in fabric, but this time on mobile, battery-powered speakers, like this portable Bluetooth one from Ultimate Ears below. Dieter Rams has on many occasions spoken out against fabric on speakers, calling them “carpets” (he mentions this in Rams), but this is a new context that I’m not sure is comparable.
Fabric in this context subverts expectations rather than confirming them, particularly given that this is a product that’s waterproof and floats. Look at that knit texture—almost like a loofah. Note how it carries over to the carry loop at the top, too.
The Google Home Mini is another example of wrapping speakers in fabric specifically, and of more creative use of material more broadly. Its design is meant to integrate more organically into the predominantly soft-surfaced home.
It’s worth considering how impactful this physical aspect is to the visual design of our screen-based products. With screen-based design so relatively young, little has been written about the design compromises and adaptations that the shift towards virtualization necessitates.
Of particular interest, in my opinion, is this adaptation of design. How can we deliver a feeling of tactility even though our products live just out of reach behind glass?
At the aquarium, with every fish behind the glass, there's nothing like the touch tank.
One way to deliver this feeling is through movement: vibration and haptic feedback (or as Apple calls it, “taptic”). Feedback is an essential part of user interfaces, and vibration or movement can be particularly valuable. This isn’t new: such feedback has been around for as long as there have been machines and tools, particularly in the form of buttons, such as keys on a keyboard. It can be pretty difficult—unsettling, even—to press a button on a surface that doesn’t provide any response to key presses, even if it’s just audio clicks. Confusingly, despite having the Taptic Engine in each device, iOS’s stock keyboard doesn’t offer haptic feedback, forcing users to turn to a variety of third-party keyboards of varying quality. The best of these is probably Google’s Gboard, which also offers swipe-based typing, a functionality Apple is finally adding to iOS this year under the name QuickPath.
Nowadays, no-feedback interactions are usually limited to only the worst of screen-based interfaces, like those found on airport self check-in kiosks or big box stores’ self checkout stations. In part this discomfort is indicative of a good thing: we’ve become so accustomed to enjoyable typing experiences, even on screens now, that when we encounter an input method that provides no tactile affordance or reassurance, we’re taken aback.
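For illustration, a rough web analogue of this keypress feedback can be sketched with the browser Vibration API (real, though notably unsupported in iOS Safari). The pulse and gap durations below are my own assumptions, not platform guidelines:

```typescript
// Sketch: keypress haptic feedback via the web Vibration API.
// A pattern is [vibrate, pause, vibrate, ...] in milliseconds.
// Pulse/gap lengths are illustrative assumptions.

function keypressPattern(presses: number, pulseMs = 10, gapMs = 40): number[] {
  const pattern: number[] = [];
  for (let i = 0; i < presses; i++) {
    pattern.push(pulseMs);                      // short tick per press
    if (i < presses - 1) pattern.push(gapMs);   // pause between ticks
  }
  return pattern;
}

// Guarded call: navigator.vibrate only exists in some browsers,
// so this degrades to a silent no-op elsewhere.
function tick(): void {
  const nav = (globalThis as any).navigator;
  if (nav && typeof nav.vibrate === "function") {
    nav.vibrate(keypressPattern(1));
  }
}
```

The guard matters: on platforms without the API (or where the user has disabled vibration), the interface should fall back gracefully rather than error.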
But vibration doesn’t really impact the biggest aspect of physical experience: this idea of material as a first-order tool for design to convey brand and user experience. We can’t physically change the tactile surface of a device (with the notable exception of a braille reader, more technically called a “refreshable braille display”).
What we do have is visual and audio design, and we can use these to approximate or reference what we know from the physical world. Visual design can use physicality as a sort of metaphor to more intuitively convey how an interface works.
Another way this metaphor is used and reinforced is through visual motion. As fantasy writers know, you can make up any rules as long as they’re consistent. If we are to suggest to the user that they should consider an on-screen element to be like a physical card, ideally, we shouldn’t break that metaphor by having that “card” do something that a physical card could not. We betray the user’s confidence in that metaphor if we were to, for instance, have such a card both above and below two other surfaces that we tell them are on the same plane. Or to state the example more specifically, if that card has a shadow, the appearance of that shadow should accurately reflect the card’s position on the z-axis.
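To make the shadow example concrete, here’s a minimal sketch in which a card’s shadow is derived from a single elevation value, so the shadow can never disagree with the card’s stated position on the z-axis. The scaling factors are illustrative assumptions, not Material Design’s actual shadow tables:

```typescript
// Sketch: derive a CSS box-shadow from z-elevation, so the visual
// shadow always agrees with the card's position in the metaphor.
// The offset/blur/alpha formulas are illustrative assumptions.

function shadowFor(elevation: number): string {
  if (elevation <= 0) return "none";                    // resting on its surface
  const offsetY = Math.round(elevation);                // higher card: shadow falls further
  const blur = Math.round(elevation * 2);               // ...and gets softer
  const alpha = Math.max(0.1, 0.3 - elevation * 0.01);  // ...and fainter
  return `0 ${offsetY}px ${blur}px rgba(0,0,0,${alpha.toFixed(2)})`;
}
```

Because every card’s shadow comes from the same function of elevation, two cards claimed to be on the same plane can never render contradictory shadows.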
There are, of course, exceptions. However strongly we reinforce the metaphor, the user knows that the UI element still lives on a screen and isn’t subject to the laws of physics. Nor is some physical rule-bending objectionable when it serves a useful purpose or when the metaphor still has value even with a compromise. For example, a text composition field could grow as its contents do if space allows, even though a physical thing could not. And besides, such adaptation and even optical illusion can be delightful, playful, or even humorous as it subverts our expectation.
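That growing composition field can be sketched as a pure function: the row count tracks the content until space runs out. The line-count heuristic and the cap are assumptions for illustration:

```typescript
// Sketch: a composer field that grows with its contents (as a
// physical object could not), capped so it stops when space runs out.
// The newline-based line count and maxRows default are assumptions.

function rowsFor(text: string, maxRows = 6): number {
  const lines = text.split("\n").length;
  return Math.min(Math.max(lines, 1), maxRows); // clamp to [1, maxRows]
}
```

The cap is where the metaphor is gracefully compromised: the field bends physical rules by growing, but still respects the layout around it.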
So why isn’t the shadow and z-axis confusion delightful? Because it compromises the most important aspect of the metaphor, the part upon which the value of the metaphor is predicated. The point of the card metaphor might in this case be to organize information into discrete “pages” that can be moved about as one would on a tabletop. If the user is confused about that critical aspect of the “material,” there’s no point to the user thinking of that element as a card. The metaphor is destroyed because the user has no confidence in it. They can no longer use it to logically predict and intuitively remember how the material will act, disorienting them and depriving them of agency by making the rules of the world in which they work uncertain.
The problem with using such metaphors in UI design is that it can be very difficult to keep the rules straight, turning what was intended as an interaction shorthand into a visual style that’s no longer beholden to the objective of usability but instead to style consistency and visual appeal. This is even more difficult when that metaphor is used at the system/OS level rather than the application level, as it is in Google’s Material Design language.
Finally, there’s texture to consider.
You can think of physicality as a series of steps: first, telling you that you’re touching something physical (done through haptic feedback); then, conveying the nature of the object itself (such as through visual metaphors of material); but lastly, and perhaps most importantly, there’s the fine detail of the object: the little big details that tell you not just that you’re holding something but what kind of thing it is.
Texture tells us so much more than we might think. It’s perhaps the primary way that we determine the type of substance we’re holding, and arguably, that has more of an impact in our impression of it than anything else. If I were to hand you an object, a white, rigid square, how would you know what it’s made of? Weight gives you a clue. But by far the most information is conveyed through texture.
People commonly think of texture as a feeling on your skin and that’s definitely the main way you perceive it. But you can also see it visually—different textures just look different and reflect specular highlights differently. And that’s good, because as discussed, if we are to convey this sense of texture, we need to do so visually.
We can do so through metaphorical realism, by making the on-screen element just look like a certain material. Sometimes the most effective way to do this isn’t in the material itself, but in the style of what appears on top of it. If we make text look embossed, for instance, we might increase the likelihood the user will perceive the substance as paper.
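As a concrete sketch of the embossed-text trick: the classic letterpress effect in CSS places a one-pixel light shadow below dark text, suggesting type pressed into a paper-like surface lit from above. The colors and offset here are assumptions, not canonical values:

```typescript
// Sketch: letterpress/emboss styling via text-shadow. With light from
// above, a bright edge just below dark glyphs reads as text pressed
// into a paper-like surface. Colors and depth are illustrative.

function embossStyle(depthPx = 1): Record<string, string> {
  return {
    color: "#5a5346",                                        // ink on warm paper
    textShadow: `0 ${depthPx}px 0 rgba(255,255,255,0.65)`,   // highlight below glyphs
  };
}
```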
And perhaps more impactfully and relevantly, we can convey texture through abstract style, showing texture through pattern and in artwork itself as a sort of meta-material or meta-metaphor. While this isn’t strictly a tactile experience—we’re using “texture” in a different sense here—it operates in much the same way within this context of visual metaphor. When we push the emphasis of textural expression to this layer, we prioritize content rather than medium. Sometimes, when it comes to bridging the gap between virtual and physical experiences, screens can elicit a suspension of disbelief where, as we look at them, we forget we’re looking at a screen as we focus on the content. Minimalist interfaces recognize and facilitate this magical transport, and especially for a device that’s meant to be more Swiss Army knife than kitchen knife, this universality is very valuable. It allows devices to serve as a blank canvas where many tasks, brands, apps, etc. can exist together, unified by the north star of serving the user’s current need.
The impact of even minor textural differences in visual style can be high, as shown in this Salt and Pepper illustration style tutorial from Google:
Another element of physicality is motion. In the real world, almost nothing is perfectly still, even if it’s just the lighting. Nothing shows the character of something better than its movement. Movement can give you insight into an on-screen object’s weight, rigidity, surface texture, and even more intangible qualities like degree of playfulness or abstraction (to what degree does the object move realistically?). Does it move itself, and if so, how, or is it always moved by an external force? When it moves, does it incorporate Disney’s 12 Principles of Animation, and if so, to what degree?
This video, produced to advertise the McDonald’s Egg McMuffin, is an incredible example of all these things combining. It won a Bronze Lion at Cannes and is just hypnotizing to watch. Check out that squishy foam texture at :44, the cheese’s specular reflectivity at 1:20, and the texture on the foam rollers just after that at 1:28.
At 1:36 we see these tubes that combine the egg and the cheese. Look at the reflection and highlight on the tubes themselves. What material is this? Even though this is a highly abstracted animation that is in many ways fairly low-fi, we can still tell what it’s intended to be. The material is a clear solid, so it’s likely plastic or glass. But glass doesn’t reflect light like that. We know from experience in the world that glass is rarely used in such an application (industrial food service), and moreover, it would be difficult to mold glass into that shape, let alone consistently for several of them as shown here. We logically conclude this is plastic, and even what type of plastic: for it to reflect like this, it should be a non-glossy, heavier plastic like those used for Tupperware-style food storage containers, probably textured high-density polyethylene (HDPE) or polypropylene (PP). We can even guess what it would be like to hold this object in our hands.
Of course, the joy of abstract animation like this is that it doesn’t need to be any of those, or even plastic for that matter. But when the artist uses this appearance, they speak a language with which we are familiar, and in so doing, they give us a reference by which we can better parse what we are seeing. It adds realism, physicality, dimension, and depth. We can almost physically feel it.
That’s how much physicality we can access through design, even when we can never actually touch something.