I've spent most of today getting the cloud back into running order and showing some natural language people round. While doing that I've been thinking about some kind of future interface. I met up with T. and told him the interface is like a word on the tip of your tongue. Nearly there, but you can't say it.
How did it start? Talking about putting an interface on the Surface where, if you put your iPhone on it, the iPhone interface would 'spill its guts' onto the Surface, giving you more interface time and space. Like a full-sized, if virtual, keyboard.
I wondered: if you placed two phones side by side, each running photos or calendars, what would happen?
And what would happen if you placed your phone next to your iMac? The machines would notice each other and communicate (T. mentioned the next-generation iPhone would have an RFID tag). How would your laptop then expand the interface?
So how would the interface work? I like the idea of an intelligent layout manager and an interface definition that would let you expand an iPhone/Android-sized app onto bigger screens. It might be simpler to store multiple layouts too (as Paul suggests). I like the idea of having resolution in the layout description, letting you specify what should go when the interface is reduced.
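A minimal sketch of that "resolution in the layout description" idea: each widget declares a priority, and the layout manager keeps only the widgets that fit the current screen. All the names here (`Widget`, `select_widgets`, the example widgets) are hypothetical, not from any real toolkit.

```python
from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    priority: int  # 0 = essential; higher numbers are the first to go

def select_widgets(widgets, max_priority):
    """Return the widgets to render at a given resolution level.

    A phone-sized screen might pass max_priority=0 (essentials only),
    while a Surface-sized screen passes a higher threshold so the
    'spilled' interface can show everything.
    """
    return [w for w in widgets if w.priority <= max_priority]

# One layout description serves every screen size.
layout = [
    Widget("photo", 0),
    Widget("caption", 1),
    Widget("metadata_panel", 2),
]

phone_view = select_widgets(layout, max_priority=0)    # essentials only
surface_view = select_widgets(layout, max_priority=2)  # the full interface
print([w.name for w in phone_view])
print([w.name for w in surface_view])
```

Storing multiple layouts, as Paul suggests, would replace the priority filter with a lookup keyed on screen class; the priority approach trades that explicitness for having only one description to maintain.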
I'm thinking we need a whole new GUI/OS: something that would support the notion of 'user', multi-threading in the rendering layer, and a cloud-based model. Seeing things like the Microsoft 2020 vision, I'm thinking it would be good to have animation built in at the lowest level.