Archives for category: Interface Design

Problem: You’re working on a large, complicated machine made of multiple parts and functions. Something goes wrong. The machine shuts down, or it starts shaking, or a part starts making a whirring noise — something you identify as out of the ordinary. You have a manual on site, as well as an online version. You start searching for what might be wrong, but the manual isn’t labeled effectively or comprehensively indexed.

Solution: Barcodes are placed on each segment of the machine. Using a scanner device with access to an online manual, you scan the problem area, and the device auto-searches and locates the appropriate section of the manual.

Rough representations of a human and a tablet device.

Presenting... a human being with a tablet device (with scanning functionality and an onboard manual, or access to an online manual).

Human is confused at a complex-looking machine.

The user encounters a confusing machine, or the machine is acting in a confusing manner. ("Help!")

Human using a tablet device to scan a barcode on a machine.

The user scans a barcode (here, a QR code) physically applied to or near the problematic part of the machine.

Example machine manual page on a tablet device.

The tablet device goes to the appropriate section of the manual, based on that scanned barcode.

That’s it!

And if that part or section of that machine is used in other machines, similar barcodes may be printed on those machines. Also, sections of the manual may be updated without needing to re-apply barcodes.
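To make that indirection concrete: the barcode only needs to encode a stable part ID, and a small lookup layer resolves that ID to whatever the current manual section is. Here's a minimal Python sketch of the idea — the part IDs and section names are invented for illustration, not from any real product:

```python
# Sketch: a barcode encodes a stable part ID; a lookup table resolves
# that ID to the *current* manual section. Updating the manual only
# means updating this table -- the physical barcodes never change.

# Hypothetical part IDs and manual sections, for illustration only.
MANUAL_INDEX = {
    "PART-0042": "Section 3.1: Drive Belt Inspection",
    "PART-0108": "Section 5.4: Coolant Pump Troubleshooting",
}

def resolve_scan(barcode_payload: str) -> str:
    """Return the manual section for a scanned part ID, or a fallback."""
    return MANUAL_INDEX.get(barcode_payload, "Section not found: see index")

# The manual gets revised: only the table entry changes, not the barcode.
MANUAL_INDEX["PART-0042"] = "Section 3.2: Drive Belt Inspection (revised)"
```

The same scheme covers the reuse case above: a shared part printed on several machines carries the same ID everywhere, and all of them resolve to the same (current) section.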

The overriding goal here is this: users don’t need to guess where to look in the manual to find help, and an on-site paper manual isn’t needed. However, the company would need to create an electronic version of the manual. That’s the hitch.

Problem: Imagine you’re using your app of choice, or you’re sifting through bookmarks folders in your browser of choice, or you’re working your way through the file menu or dock of your OS of choice. You’ve gone one folder in (“Bookmarks”), two folders in (“General Bookmarks”), three folders (“Fun”), four (“Movies”), five (“Comedy”), and you’re just about to click on a link to your favorite funny movie (“Jingle All The Way”)… then your mouse slips, left, down, wherever.

You just lost your filepath. It’s not a big deal, but if you use a computer like I do, you navigate filepaths dozens of times daily — more, if you’re an organization freak.


If I accidentally slide left...

The example menu path is diminished as the user slides left.

Dang it! Now I have to go back through and find my menu selection again.

Solution: One click locks a step in the filepath, a unit of hierarchy. An icon shows that this step or unit is actually locked, and there is a visual indicator of some sort that it is unlockable at any moment.

Something like this:

Locked menu demonstration - menu not quite full.

The user has just started navigating through the menu...

Demonstrating a locked menu - expanded.

...and the user wants the menu to stay put while deciding which option to select.

Demonstrating the locked menu - locking the menu.

So, the user clicks on the open lock icon...

Demonstrating the locked menu - menu locked.

...and the menu is locked at that position. The user may click the closed lock icon to undo this.

And that’s it! Just a simple icon change and some undoubtedly complex coding to make navigating menu paths somewhat easier.

I really feel like there could be a better way of indicating locked and unlocked menus, but the point I’m trying to make here is that for complicated paths, it would be helpful to prevent one’s own slip-ups with the mouse or trackpad by locking a section of the path.
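The locking behavior itself can be modeled as a tiny state machine: the open path is a list of menu levels, a mouse slip normally collapses everything past the hovered level, and a lock pins the path at a chosen depth. A rough Python sketch of that logic (the class and method names are mine, not from any real toolkit):

```python
class MenuPath:
    """Sketch of a lockable menu path: a mouse slip collapses the path
    unless the user has locked it at some depth."""

    def __init__(self):
        self.open_levels = []      # e.g. ["Bookmarks", "Fun", "Movies"]
        self.locked_depth = None   # None = unlocked

    def open_level(self, name):
        self.open_levels.append(name)

    def toggle_lock(self):
        # Clicking the open-lock icon pins the current depth;
        # clicking the closed-lock icon releases it.
        if self.locked_depth is None:
            self.locked_depth = len(self.open_levels)
        else:
            self.locked_depth = None

    def mouse_slipped(self):
        # Unlocked: the whole path collapses (the "dang it" case).
        # Locked: the path survives down to the locked depth.
        keep = self.locked_depth or 0
        self.open_levels = self.open_levels[:keep]
```

With the five-level bookmarks path from the example, locking after the fifth level means a slip of the mouse leaves all five levels open instead of collapsing them.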

It’s been a while since my last ideation!

Okay, this one’s pretty simple. The idea: add real-time collaborative editing to design applications. Like collaborative whiteboards, but with integrated tools, note-making, and granular functions.

Web app, iPhone, and iPad users collaborating on one canvas.

In the image, the web app user is adding an ‘X’, the iPhone user is adding a note, and the iPad user is scribbling away. They all do this at the same time, or one after another.

In a real-life example, a user experience designer, a visual designer, and a front-end developer could all be working on a single interface for a brand-new iPad game with a complex logical structure. They’re trying to understand basic interactions from one screen to another but don’t have time to have the UX designer cook up several wireframe iterations, then have the visual designer and front-end developer take those frames and make a basic prototype. They have 10 hours left before development. Oh, and one lives in New York, the second in Minneapolis, and the third in Houston. In-person is not an option.

So they all use their favorite basic visual tool (e.g., OmniGraffle, Photoshop, or Fireworks) and work from one screen to another, visually describing the interactions with boxes, arrows, text, gradients, shadows, and, most importantly, immediate verbal communication and response.

Visual might ask, “Hey, how will the user add more players to the game?” and UX would add an Add Player function to the screen (maybe a button with a modal prompt). Then Dev would say, “iOS doesn’t support that sort of modal, but you could do this,” and Dev redraws the button with a text-box alert instead. Then Visual says, “That works, but it doesn’t conform to the game’s proposed visual style. Maybe this?” And so on and so forth.
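Under the hood, this kind of real-time collaboration usually comes down to each client broadcasting small, timestamped operations that every peer replays in the same order to reach the same canvas. A toy Python sketch of that shared-operation idea (the operation shapes, field names, and example edits are invented for illustration, not from any real app):

```python
# Toy model of shared-canvas collaboration: each client emits small
# timestamped operations, and every peer replays the merged stream
# to reconstruct the same canvas state.

def merge_streams(*streams):
    """Merge per-client operation lists into one shared timeline."""
    ops = [op for stream in streams for op in stream]
    return sorted(ops, key=lambda op: op["t"])

# Invented example ops from the three users in the scenario above.
web_user = [{"t": 1, "who": "web", "op": "add_shape", "what": "X"}]
iphone_user = [{"t": 2, "who": "iphone", "op": "add_note", "what": "needs Add Player"}]
ipad_user = [{"t": 3, "who": "ipad", "op": "scribble", "what": "arrow"}]

timeline = merge_streams(web_user, iphone_user, ipad_user)
```

Because every client replays the same merged timeline, it doesn’t matter whether the three of them draw at the same time or one after another — everyone ends up looking at the same screen. (Real tools resolve genuinely conflicting edits with more machinery, e.g. operational transformation, but the timeline idea is the core.)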

Like?