John Underkoffler designed the hands-on UI for "Minority Report" way back when. He's since been developing a real-world version of it. And has managed, through sterile corporate site design and a pitch full of businessy catchphrases, to make it look incredibly boring. But it's not! To be fair, his 2010 TED Talk made it look way cooler and more widely applicable.
There is a fantastic tumblr that contains all the ridiculous UIs from movies: http://fakeui.tumblr.com/
Wow, you were not kidding. That could have been presented in a much more interesting way. It looks like something Bill Gates would have presented and it should have been Steve Jobs. If that makes any sense…
That makes perfect sense. It's kind of distressing to see how quickly a wonderfully outlandish idea can be assimilated into the most banal trappings of the everyday. Way to suck the magic right out of the world. But like I said, the linked TED Talk does it more justice.
The ergonomics and haptic feedback of the Minority Report UI were fucking gawdawful. You would never work that way - the whole point was that it looks whiz-bang, so that people who don't know what they're looking at, shot from a contrived angle, will be impressed. What they have here is basically a video wall missing most of the pieces, with iPad control bolted on. I've been mixing on video walls very much like this for four or five years now; the difference being they're programmed and controlled by people who know what they're doing. They have iPad apps, too.
I could imagine making an errant social gesture and accidentally deleting all of the progress on whatever work you're doing. On top of that, it looks really neat for reviewing visual-based material, but I have no idea how it would translate to creating, like, a report or something. It would probably take forever and be full of typos. I do think the general principle is cool, though- this idea of doing away with barriers between the processed information and the physical world (i.e. getting rid of the screen, integrating the interface into your everyday environment), and turning the human body into a part of the interface. That's in evidence more in the TED talk than the shitty sales pitch. Lot of focus right now on how to make a device unobtrusive enough to seamlessly integrate our computer time into our real-world time (Glass, for an extremely frustrating example). But those devices still erect barriers between processed and natural experience. The guy who builds an interface that functions without the old framework of the keyboard and screen, and which projects seamlessly into our everyday world... that guy's gonna make a mint. My friend showed me this. Seems like it might show promise.
I remain unconvinced. There has never been anything that we want "integrated" into our natural experience. This is why head-up displays remain the province of military aircraft while an iPad is wildly successful as a magic-put-picture-in-my-hand box. Books have, by and large, stayed about the same size - and most of the things we manipulate easily end up about that same size. A desk from Roman times is the same size as a desk at Office Depot because our "workspace" is determined by our span more than anything else. Google Docs allows all sorts of realtime collaboration, and nobody really uses it. It isn't necessary. It's distracting, in fact. I probably link to this every other month or so. It's four years old and it's still en pointe. The fact of the matter is, nobody involved in these whizzbang UIs has much experience with biomechanics or human factors engineering... and the reason a Macbook Air looks a lot like a streamlined Underwood isn't a lack of imagination, it's a lack of utility. A distinct lack of utility is a commonality amongst all of these whizzbang ideas.
So, here's what's weird about the link that you posted: it says pretty much exactly the opposite of what you're trying to say. Which frightens and confuses me, because I don't know what you're gonna do next. You could do anything. The whole point of that iRant (by the guy who designed some of the stuff we use today so frequently, so intuitively, that we've forgotten it once wasn't part of our "natural experience" - but that's another point, hold on) was that current designers are thinking too small with future UI... and that the good designer will draft something that more seamlessly melds with our natural experience. Did you read the part about the hands? I mean, that was like 75% of his thing, so yeah, you read the part about the hands. He wasn't saying "whizz bang needs to go away," he was saying, "more whizz, more bang." And relating it back to the gestural UI- what's a little weird about his post is that he starts out by slagging it for not being integrative enough, and then finishes by being like, "think about how many gestures you use as a human! We should be using more gestures!" Which, of course, is the crux of the gestural UI approach. To say that the designers involved in these UIs have no training in biomechanics is, and I'm putting this as delicately as I can, wrong. Now, normally, I wouldn't be this direct- especially in regards to an area in which I have only passing knowledge. But it just so happens that my very best friend, who I've known since I was a sprout and still speak to on a weekly basis, got his masters in design from a very, very good program and now works as a designer for a very, very good design company. Does that make me an expert? Aw, hail no. But I did talk through his thesis with him while he was working on it, and read the things he read for his thesis at his suggestion, and then read his thesis. Which was on, get this, biomechanics and their application to future design.
More specifically, the human body as the most important mechanical component of the UI. And let me tell you, a lot of designers have poured a lot of time into this idea, for decades and decades. Small example: get your hands on NASA's Bioastronautics Databook. Hundreds of pages of design trying to get at the utmost intuitive way of interacting with mechanics. Pages and pages of widgets, toggles, windows, buttons... each one minutely designed to interface perfectly with the human body. And that was the sixties, man. You really think they discarded all that in the past few decades? As your blog friend says - nothing just happens, everything is built and developed upon. As for us never wanting anything integrated seamlessly into our natural experience - what do you think design is if not a pathway for intuitive interaction with, and manipulation of, our surroundings, our lives? And the best design is so seamless we take it for granted, barely notice it. Open your fridge, pull out the milk, pour it into a glass, drink it, rinse the glass, put it in the dishwasher. You've just used like a dozen items that were designed for the most fluid interaction - from the fridge to the knob on the sink faucet to the carton that the milk came in - and they've become so pervasive in your natural experience that you don't even get excited about them. One more thing which is neither here nor there, really, just a funny coda. Google Docs? I have to use that for both of my jobs, in totally disparate fields. And it's actually indispensable. Not the best design. But really very useful.
Your reading comprehension is low and your understanding of the situation is lower. I'm delighted that your friend is a designer with a thesis. I started to get a degree in design and then noticed that no real math was required. Then I transferred and got a degree in engineering with - wait for it - post-graduate studies in biomechanics and human factors engineering. The entire premise of Mr. Minority Report is "don't bother touching anything." The entire premise of Mr. "Interfaces Under Glass Suck" is "without touch, what's the point?" These are entirely separate positions, diametrically opposed, at complete odds with each other. The fact that you can read both and decide they're saying the same thing indicates that you missed the point entirely. Your Bioastronautics Databook, for example, was not created by designers. It was created by ENGINEERS. There's a reason NASA doesn't use touch screens. Ever tried touch-typing on an iPad? How'd that go for you? In fact, ever tried doing anything with a touchscreen when you aren't looking at it? THAT was the point of the article I linked. There's no feedback from touch screens of any kind, and the primary way we interact with our hands is through feedback. So you say this: ...which indicates that you didn't understand what you read. And then you say this: Having linked to a video wall designed by the guy who brought us the UI from Minority Report. Look - this is a Motorola Startac. How long would it take you to figure out how to make a call? More importantly, could you make a call while, say, riding your bike? Or watching a sunset? Now compare: Granted - the Startac does one thing while the S4 does everything. But that one thing that the Startac does, it does hella better than the S4. That's the point. The argument I linked to is exactly this: interface design is being driven by available technology, rather than having design drive that technology.
This is the fight that kept the Kindle off shelves for two years: Jeff Bezos wanted a cross between a Blackberry and a book. It needed to have a keyboard. The first design team refused to give him a keyboard because they couldn't make it look cool. And not a one of those cues is even vaguely a part of the design of the objects we use to interface with technology. Not. A One. THAT is the point. Now - read this sentence again: So. Given the subject under discussion, and given the broader point being made, do you think I was saying "nobody really uses" "Google Docs?" Or was I saying "nobody really uses" the "realtime collaboration" of Google Docs?
Which, of course, is the crux of the gestural UI approach.
And the best design is so seamless we take it for granted, barely notice it
Open your fridge, pull out the milk, pour it into a glass, drink it, rinse the glass, put it in the dishwasher. You've just used like a dozen items that were designed for the most fluid interaction- from the fridge to the carton that the milk came in- and they've become so pervasive in your natural experience that you don't even get excited about them.
Google Docs allows all sorts of realtime collaboration, and nobody really uses it. It isn't necessary. It's distracting, in fact.
No, my reading comprehension is fine. Quite good, actually. For instance, I noticed right off the bat that you abandoned your design degree - at a bachelor's level? - for engineering. Because design didn't utilize enough math (admission: my reading comprehension isn't good enough to parse what that has to do with the price of apples). And that's awesome, engineering is an impressive field, good job, buddy! But you abandoned design. So how can you confidently say that design doesn't make use of biomechanics? Especially when I'm telling you straight up that I know designers who have devoted their studies and careers to the application of biomechanics to design? And getting back to Mr. Victor's blog post and my understanding of the same - no, I get it, I get it, I get it. For the most part, he's talking about making more intuitive UI through the utilization of one of our most prevalent and delicate senses: touch. And the point that I didn't feel like I had to belabor before was: here's a link to a designer talking about how we might more successfully integrate UI into our everyday lives for a more fluid, natural and human experience. It's a neat link! Unfortunately, it's a link you supplied to buttress your original points: that there "has never been anything that we want integrated into our natural experience," and that "nobody involved in these... UIs has much experience with biomechanics." I mean, do you recognize the irony of making those assertions and then backing them up with a blog post from a designer talking about how to better use biomechanics to create a more intuitive, natural UI? Instead, I focused on the little tidbit at the end where he talked about how neat gestural UI could be (read the "one more step" section at the bottom... but of course, you already have, right?), and how we shouldn't limit ourselves to touch screens, but adapt UI to encompass the full range and expressiveness of human motion.
I mentioned it because it's a point that actually works pretty well alongside my initial submission. But you want to focus on touch screens, and apply their clunk to my larger point about human/machine interaction. And that's fine, I see a peripheral relevance. Here's the thing. Designers have acknowledged that, and they're working on it. I read an article, what, a couple years ago now, about a design for a touch screen that touches back, essentially addressing that concern about lack of responsiveness. Presumably, they're still working out the kinks. In the meantime, I'll keep having to type out my responses on the iPad, which, believe it or not, I'm doing right now and have done in almost all of my Hubski interactions for the better part of the last two years. Now, I want to make a couple more points re. your whole Samsung spiel and your point about milk-drinking cues and lack of application to tech interface. But I'm gonna make them quicker and more off the cuff, because I'm getting bored. Sorry. 1) re. the Samsung Galaxy- it's telling that you chose that as your example rather than the Galaxy's true precursor (as upheld in various legal battles)- the iPhone. In fact, in all three of your last posts, I've seen a studious avoidance of the iPhone as an example. I can only assume that's because the iPhone is so... goddamn... intuitive that you don't want to draw attention to it. You know what my three year old daughter couldn't figure out how to correctly use? The Startac. Oh, she could ape the movements she's seen from my phone interactions, but that's about it. You know what had my daughter surfing YouTube within minutes of picking it up for the first time? You guessed it- iPhone. There's a lesson there (beyond a lesson in questionable parenting). Okay, so that. And skipping to 2)- all of those cues you and I talked about are part of design we use to interface with tech. Because a fridge is tech. As is a sink. As is the microwave, the oven, hell, even the milk carton.
Not new tech, but tech all the same. Yes, I said it. A milk carton is a kind of user interface. For the effective transportation and bio-processing of milk. But if you want to apply my examples more narrowly to digital interfaces- we're still approaching that. What about Nest? A household item becomes an extension of the user interface. And there are prototypes for smart refrigerators, smart cars, smart alarms, smart watches. Probably smart sinks. All of them designed to meld your everyday experience, that which is driven by natural, intuitive product interaction, with the way you store and process information digitally. So there's that. Listen, if you have a good point, make a good point. I don't care about flash, and your snarkier comments are boring. I read your posts time and time again for the smart stuff- despite the boorish rhetoric, not because of it. I interact with you time and time again because you have great points, and often something to teach. And that's why I'm responding now, because maybe you have something to add besides "you're wrong, and an idiot." But I'm not interested in a dick-waving competition. So. If you have a point beyond rhetorical jujitsu, have at it. If you're just going to respond with some classic KB zinger, don't bother. I don't care about your zingers, I care about good discussion.
I must've missed that every other month or so, great link! Only problem I have with it is that he keeps his envisioned future quite vague - "Why aim for anything less than a dynamic medium that we can see, feel, and manipulate?" In part because it is hard to imagine, but the article was written like it was building up to a solution that is more than "HANDS, YO." That would be a great band name.
Whizzbang Ideas
You could start by drilling up. The larger point is this: Our interfaces are driven almost entirely by available technology, rather than developing the technology necessary to create better interfaces. Classic example: If you're typing this on a laptop, there's a camera pointed at your forehead. This is because screens are opaque. So when you skype your friends, your relatives, your boss, your clients, they look at your forehead and you look at theirs. No big deal, right? A limitation of the technology, and we move on. Yet we're left with a nagging feeling that something wasn't quite right. As it turns out, eye contact is a big fucking deal. However, it's also an incredibly expensive and cumbersome problem. That didn't stop Apple from tackling it. It's been five years, though, and we haven't seen anything... so it's one of those ideas that wasn't judged perfect enough to roll out. There are all sorts of patents like this: back when I was doing videoconferencing design, it was astonishing all the cool ideas Philips was coming up with. They were patenting whizz-bang ideas all over the place, doing crazy shit, solving problems and rockin' it like they were PARC reincarnate. But then Philips decided that they would never make their money back in the zero-margin world of projectors, so they killed the division. And all those patents were left to wither on the vine. Here's a funny one: Philips solved 3D. Straight up. Fuck glasses. Fuck everything. Philips figured out how to run an image through processing behind a lenticular filter that gives you legit, parallax 3D. Works on TVs, works on devices, works on anything you shine light through. The best part is that the closer you are to the device, the more 3D you see - as you get further away, the image blends gracefully into 2D. Seamlessly. No glasses. No fuzz. A 3D TV off axis is a 2D TV, and the better your viewing conditions, the better you see it. Costs next to nothing. So who'd they license it to? Dolby.
(Say what?) Right - so the company with the most to gain from stupid 3D glasses at giant front-projection movie theaters controls the patents for glasses-free 3D at home and on the go. And they've got one guy showing it off at about two or three tradeshows a year. 'cuz if they rolled it out for real, it would require a complete redesign of every movie theater on the planet. I only know of that one patent sitting at Apple doing nothing. I never worked there. I would imagine they're sitting on just as giant a treasure trove as Philips is. At the same time, Apple is the company that killed buttons for phones. They're the champions of the 1-button mouse. So I think it's safe to say they don't give the first fuck about ergonomics.
Very true. The problem with it is that we either need a) someone with that long-term vision and resources or b) an existing technology being used for a new interface and catching on. The former reminds me of Apple after they bought NeXT, and the latter seems improbable with today's glassy swipey touchey surfaces. Yeah, now that you mention it, I remember that. Philips basically made Eindhoven a city, and they did some cool demos of that technology in the area, such as at a nearby theme park (it's WOWvx tech; it seems to be the same you described, although I don't know enough to tell).
The larger point is this: Our interfaces are driven almost entirely by available technology, rather than developing the technology necessary to create better interfaces.
Here's a funny one: Philips solved 3D. Straight up. Fuck glasses. Fuck everything.
Are you a fan of Errol Morris? He invented a device he calls the Interrotron to solve this problem when he conducts interviews. It's a system that projects his face onto the lens of a camera, so that he can maintain eye contact with his interview subjects while they're actually looking at the camera and not at him. He feels that it gives a more intimate view of his subjects to the audience, a view which is normally lost in documentaries. Coincidentally, he used to make commercials for Apple, too; not sure if he ever prodded them on the topic of video chat, but it wouldn't surprise me.
As it turns out, eye contact is a big fucking deal. However, it's also an incredibly expensive and cumbersome problem.
I've seen that. It's less of a problem in an interview situation because the conversation is necessarily 1-sided with augmentation. When we do those interviews we can keep nagging the subject to look at the camera; Morris's approach basically lets him misappropriate a teleprompter. We've been putting those on iPads through split screens since iPads came out.
If you click the link at the very top of the blog post, he mentions that his favorite designs can't be shown or talked about in any great detail for legal reasons. I'm assuming that's why he has to be so vague about it in the main post.
For sure. Is he talking about the Power Glove? Pretty sure he's talking about the Power Glove...