Posted by: karisyd | January 24, 2012

I don’t want to talk to my TV! (it’s bad design)

It seems that the latest trend in consumer gadgets and interface design is to make us speak to the things we use.

Inspired by Apple’s Siri and advances in natural language processing, a score of everyday appliances that take commands from users via gestures or speech were showcased at the recent CES in Las Vegas. Proponents argue that soon we will interact with our TV, car, fridge and other everyday appliances by speaking to them rather than handling them physically; as one put it, “Voice is the most natural human-computer interface – everyone knows how to use their voice”. So, the idea is that I talk to my TV instead of using my remote. This is supposed to free me from the hassle of dealing with my TV in a physical way.

But do I want to talk to my TV?  – No, I don’t!

Is talking to my TV more natural?  – No, it isn’t!

Is it more effective? Definitely not! So what is going on here?

In my view, the push for speech interfaces is rooted in a false (but widely taken-for-granted) view of our relationship with tools and gadgets in our everyday environment. Let me explain.

Rethinking our relationship with technology

In a recent conference paper, my colleague Robert Johnston (University of Melbourne) and I have made a push to rethink our everyday relationship with tools, in particular IT and other technologies. We have based this analysis on the philosophy of Martin Heidegger. Because this is not the place to go into philosophical detail, I will try to make the argument as straightforward as possible.

The traditional, and indeed common-sense, view today is to conceive of technology use as a duality between a user subject who interacts with a technology object (as signified in terms such as HCI – human-computer interaction). Based on this duality of user and artifact we carry out design, always assuming that a user subject interacts with an object. But this objectification of technology is precisely what creates the problem I want to expose here. Bear with me!

The traditional duality view (subject<->object) completely overlooks that when we interact with our everyday tools, we do not experience ourselves as subjects interacting with objects. In fact, when everything works well we do not experience these objects at all! They blend into the background (they withdraw, in Heidegger’s language). For example, when we are absorbed in writing a text we do not take note of the computer or the word-processing software (as long as everything goes well); when we drive a car, we do not have to take note of the car, we just move around; and indeed when I go to the lounge room, I do not take note of the TV or my remote as objects that require interaction – before I know it, the TV is switched on and I am watching the news.

So, what is at work here? According to Heidegger, things can take on two different ‘ways of being’. We can bring them into view as objects (as present-at-hand). When we design, inspect and reflect on things they show up as these objects. However, when we acquire a skill for using these things they tend to withdraw (they become ready-to-hand). They just function happily without the need for reflection or explicit interaction in which we as subjects manipulate an object that comes into view. When we achieve this very natural, everyday relationship with our tools, they take on the way of being of what Heidegger calls equipment. In fact, for us to expertly and effectively use our everyday tools, they have to become ready-to-hand; they cannot stay in a present-at-hand mode, where they require reflection or attention. Berkeley professor Hubert Dreyfus has outlined what is at work here: through learning we gradually acquire an embodied skill for using our equipment. We go from mentally reflecting on, attending to and using an object to dealing with equipment in an embodied way, which does not require thinking or even noticing. This is what happens when we learn how to drive a car, and indeed with all tools in our everyday environment. Neuroscience has recognised these two different modes of dealing with our environment as reflective and automatic processes.

What does this mean for design of everyday things?

Back to my initial example. I would argue that a TV is indeed one of these everyday objects. What I want from my TV is precisely this equipment relationship. I want to watch a movie or the news, or interact with content, but I don’t want to have to deal with the TV thing. What is needed is an interface that allows me to acquire the embodied skill for just dealing with the TV as equipment without needing any reflection.

Now, contrary to what the proponents of speech interfaces argue, a physical, embodied interaction is precisely what is the most natural relationship with any tool; where my body can turn on the TV without me having to make an effort or even noticing the TV. What I want is to stay absorbed in a social conversation or my current train of thought and still be able to ‘use’ my TV. I do not want to be forced to give the TV my attention, treat it as an object and talk to it. As simple as that!

Forcing the user to deal with everyday things is bad design, grounded in a false view of what it means to have a natural relationship with an everyday tool. One more example: take driving a car. For a novice, interacting with the car by speech and telling it what to do without having to learn how to actually drive might sound tempting. But compare this with the skill of an average driver: next to the effortless automatic processes at work when driving a car, having to pay attention and speak to the car is rather ineffective; it is also tiring and will simply be too slow in many situations. Simply: it doesn’t work.

Good design gets out of the way.

The quintessence is that good design of everyday things needs to allow users to acquire embodied skills easily; it needs to recognise the way of being of equipment (in Heidegger’s words).

Good design gets out of the way and does not require the user to treat the tool as a thing. Some companies and designers understand this pretty well; take a look at this TV ad for the Apple iPad: “when technology gets out of the way, everything becomes more delightful“. Precisely!

I do not want to talk to my TV! It is bad design!

Now, I grant that the current status quo, with its collection of black boxes (a TV, Blu-ray/DVD player, set-top box, receiver and so on) and its range of remote controls, is unsatisfying. It leads to far too many breakdown situations that require attention. But a solution that cements the present-at-hand attention mode by making us speak to the thing is no answer either. We need interfaces that are unobtrusive along the lines outlined above.

For this, we need to rethink the way we conceive of the relationship between the user and technology!

As a postscript:

Of course, speech interfaces have their place in the world, no question about that. For example, speech interfaces are great 1) wherever we need to interact with technology in a non-everyday manner (e.g. kiosk systems), 2) where technology helps overcome the limitations imposed by disability, or 3) where it supports tasks that require attention anyway, as in dictating something to a personal assistant (precisely what Apple’s Siri does).


Riemer, K. and Johnston, R.B. (2011) ‘Artifact or Equipment? Rethinking the Core of IS using Heidegger’s ways of being’, 32nd International Conference on Information Systems (ICIS 2011), Shanghai, China, 7 December 2011 (Winner of the Best Paper 1st Runner-up Award) (read more).

Hubert Dreyfus’ article on learning and skill acquisition.

Sydney Morning Herald article on CES news.



  1. Kai, I DO want to talk to my TV and I ALREADY talk to my GPS and my mobile phone. 🙂
    Before I tell you why, let me say that I really like your ICIS paper and I hope that it will have an enormous impact on the IS discipline in the next years. We need more studies like yours that help us to better understand how we interact with technologies.
    In the case of this post I think your explanation of Heidegger’s argument is really helpful. However, I do not fully agree with your consequent interpretation. I would think that (similar to what you indicate at the end) speech interfaces have their place. For me, the speech interface of my GPS makes it more ready-to-hand. I do not have to stop my car to put in a new destination, but can do this while driving (and even while listening to music at the same time). And the fact that I no longer have to search for my remote control, but can just say “channel 5”, makes the TV more ready-to-hand than before. Thus, I think that in just a few years from now we will interact with electronic devices by gesticulating, speaking (and perhaps someday in the future even only thinking) in such a normal, routinized way that we might wonder “how could my TV ever have been ready-to-hand with this remote thingy before”. 😉
    Let’s see.

  2. Hi Alex,
    Thanks for the flattering comment and your honest critique. Much appreciated.
    I can see your point. You’re right, obviously: the current remote control interface is far from ideal and has been outgrown by the feature richness of today’s TVs. It just doesn’t work anymore. Quite obviously, a straightforward speech or gesture interface, like the one that already exists with Microsoft’s Kinect, will be a significant improvement.
    At the same time, I haven’t fully made up my mind about the nature of speech interaction. In my view, speech will always require more active and conscious effort than a fully embodied, automatic process that is just executed by one’s body. To what extent speech interaction with gadgets can become truly ready-to-hand, I will have to think about.
    Anyway, my general point remains: interfaces should be designed in a way that lends itself to facilitating effortless, embodied interaction. And I’m sure there are ways other than speech that might significantly improve interaction with the TV, so that it can become what it was designed to be – an invisible medium, not a thing that requires attention.
    After all, with my current (old JVC CRT) TV, having learned and internalised its interface (and given that I find the remote where it should be), I don’t have to make any effort to operate it routinely. Having said that, I am sitting here awaiting delivery of our new LCD TV, which will blow this ready-to-hand experience out of the water and put me in a new unready-to-hand learning situation. Who knows, I might change my mind and want a speech interface after all.
    Cheers, Kai
