Tuesday, March 27, 2007

Pad++ and the zooming user interface with adaptive rendering!

I've just got the adaptive rendering code working for the Amazon interface. I was looking around and, lo and behold, the other people who have done time-centric rendering were the people behind the original zooming interface (Ben Bederson, Jon Meyer).

Pad Papers

I think what I have produced is novel enough that I could do a paper about it - mine is in Java, their stuff was C++/C/Tcl. I wonder what happened to the Pad++ interface. The zooming user interface seems such a natural thing.
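For anyone curious, the core of the adaptive idea is simple enough to sketch. This is only an illustration of one way to do time-budgeted rendering, assuming that is roughly what "time-centric" means here - the class and method names are made up, not the real Amazon-interface code: draw items in full detail until the frame's time budget is used up, then fall back to cheap placeholders so the interaction stays smooth.

import java.awt.Graphics2D;
import java.util.List;

// Hypothetical sketch of time-budgeted adaptive rendering.
class TimeBudgetRenderer {
    static final long FRAME_BUDGET_MS = 20;   // assumed per-frame budget

    void renderFrame(Graphics2D g, List<ZoomableItem> items) {
        long start = System.currentTimeMillis();
        for (ZoomableItem item : items) {
            if (System.currentTimeMillis() - start < FRAME_BUDGET_MS) {
                item.renderDetailed(g);       // full detail while time remains
            } else {
                item.renderPlaceholder(g);    // cheap box once the budget is spent
            }
        }
    }
}

// Hypothetical item interface: each item knows a detailed and a cheap rendering.
interface ZoomableItem {
    void renderDetailed(Graphics2D g);
    void renderPlaceholder(Graphics2D g);
}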

Software Development Kit for Nokia RFID application developers now available

Nokia - Software Development Kit for Nokia RFID application developers now available

Perhaps I am just playing catch-up, but it looks like you could wave your phone near an RFID tag, have it 'read' the tag, and then do something with it.

cool.

Wednesday, March 21, 2007

Pictures from the Amazon table



This is a picture of the Amazon table - one of the multi-touch table projects I am working on. By putting your fingers on the table and pulling them apart you zoom in to the Amazon landscape. By pulling your fingers together you zoom out. If you keep zooming in you eventually get to see the books. This feels quite natural.
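Incidentally, the pinch gesture itself needs very little code. Here is a sketch (invented names, not the actual table code): the zoom factor is just the ratio of the current distance between the two fingers to their distance when the gesture started.

import java.awt.geom.Point2D;

// Hypothetical pinch-zoom helper.
class PinchZoom {
    private double startDistance;

    void begin(Point2D f1, Point2D f2) {
        startDistance = f1.distance(f2);
    }

    // Returns the factor to multiply the current view scale by.
    double update(Point2D f1, Point2D f2) {
        double d = f1.distance(f2);
        return (startDistance > 0) ? d / startDistance : 1.0;
    }
}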





This shows the books section - in the web version, clicking on a book goes to Amazon.

Monday, March 19, 2007

Peripheral vision

I have been wondering whether it would be possible to use a separate peripheral vision display as a way of creating a deeper sense of immersion in a system - I mean on a desktop or laptop.

Use for spatial audio idea

I was talking to Paul about the iPod shuffle and he mentioned that he thought it was hard to have a screenless interface. I quite agree with him, and said I wondered why. I have always been interested in trying to produce an audio interface, but would like to get beyond the phone interface of 'press one for action A, press two for action B, wait through a long slow list to get to the option you want'. With icons, location and text you can be reminded and skip to the option you want pretty quickly. The subject moved on to Simon's excellent audio interface and we came up with the idea of multiple speakers all talking at once.

Imagine you have a pair of headphones and a keypad with, say, five buttons, and the machine wants you to make a choice. The idea is that all five voices speak out loud at once. Each voice gets its own position in space, and you use your cocktail-party listening ability (picking one voice out of many) to identify the one you want. So imagine one voice repeating 'Option one to revise your document. Option one to revise your document.'
Another voice repeating
‘Option two to create a new document. Option two to create a new document….’
And so on for all five voices.

The spatial audio (position in the stereo field) would help indicate which of the buttons to press. The distance to each speaker, plus variation in the speakers themselves (male, female, old, young, accents), would act as further ways to separate out the voices (perhaps).
Clearly there must be some natural limit to the number of voices you can differentiate, but perhaps four or five would be usable. Worth testing out.
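To make the idea a little more concrete, here is a tiny sketch of how the five voices could be spread across the stereo field using constant-power panning. The names and numbers are placeholders, not a real implementation - a proper version would do something richer than simple stereo panning.

// Hypothetical: spread N prompt voices evenly across the stereo field.
// Each voice gets a (leftGain, rightGain) pair that a mixer would apply
// to its mono speech signal.
class VoicePanner {
    // pan runs from -1.0 (hard left) to +1.0 (hard right)
    static double[] gainsFor(int voiceIndex, int voiceCount) {
        double pan = (voiceCount == 1) ? 0.0
                : -1.0 + 2.0 * voiceIndex / (voiceCount - 1);
        double angle = (pan + 1.0) * Math.PI / 4.0;     // 0 .. pi/2
        return new double[] { Math.cos(angle), Math.sin(angle) };
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            double[] g = gainsFor(i, 5);
            System.out.printf("voice %d: left=%.2f right=%.2f%n", i + 1, g[0], g[1]);
        }
    }
}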

Friday, March 16, 2007

Multiple mice part 2

http://icculus.org/manymouse/

The other thing I have always wanted to build is a 3D editor with multiple mice. To make a cube you would pull the mice apart, each mouse holding the appropriate corner. This would require scroll-wheel mice to be part of the setup.

Actions like cutting would become very simple - you click on the thing you want to cut with one mouse and use the other mouse to do the cutting. You could also use one mouse to move the viewpoint while using the first mouse to hold/move the item in question.
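A sketch of the geometry for the cube idea (purely hypothetical names, and assuming each scroll wheel supplies a depth value): each mouse gives an (x, y) corner, the wheels give z, and the box is just the axis-aligned volume between the two corners.

// Hypothetical two-mouse cube helper.
class TwoMouseBox {
    // Returns {minX, minY, minZ, width, height, depth}.
    static double[] boxFrom(double x1, double y1, double z1,
                            double x2, double y2, double z2) {
        return new double[] {
            Math.min(x1, x2), Math.min(y1, y2), Math.min(z1, z2),
            Math.abs(x1 - x2), Math.abs(y1 - y2), Math.abs(z1 - z2)
        };
    }
}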

Multiple mice (part 1)

I have been checking out:

http://icculus.org/manymouse/

or

http://jusb.sourceforge.net/?selected=about if you have a Linux box. This allows multiple keyboards too.

I have always liked the notion of an open meeting tool - a way for groups to have conversations without having to take turns to speak. There would be a large screen on the wall and the chair would ask for opinions, like 'should we do this or that?'. People could speak, but others could annotate with their typed opinions - these could be either named or anonymous. It could be done if you could collect the input from many keyboards.

I've sat in meetings where something has come up and some senior person says 'that reminds me of the time...' (OK, a case of the pot calling the kettle black here, but bear with me). This tends to move the conversation along, making it impossible for people to go back and readdress the question.
Meanwhile some underling has important information ('I know this is possible - it has been done already by our competitors') or some alternative suggestion, but is too nice to speak up.
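The core of the open meeting tool above could be very small. Here is a sketch (hypothetical names, and it deliberately ignores how the keyboards are actually read - jUSB, separate terminals or networked clients would all do): each keyboard's reader posts annotations to a shared queue, and a single display loop drains the queue onto the big screen.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical core of the open-meeting tool.
class MeetingBoard {
    private final BlockingQueue<String> annotations = new LinkedBlockingQueue<>();

    // Called from each keyboard's reader thread; name may be null for anonymous.
    void post(String name, String text) {
        annotations.offer((name == null ? "anonymous" : name) + ": " + text);
    }

    // Display loop - here it just prints; a real version would draw to the wall screen.
    void run() throws InterruptedException {
        while (true) {
            System.out.println(annotations.take());
        }
    }
}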

more random thoughts next week.

sheep

Monday, March 12, 2007

Fun with the GML DiamondTouch driver for the Mac

On Friday I ended up spending most of the day working with the GML Mac driver.
http://iihm.imag.fr/projects/gml/
The driver let me write a Mac version of the PC MERL program (a kind of DiamondTouch debugger). Looking at the output, we began to wonder if it was possible to pick up two or three individual fingers. With this you could draw a line between two fingers, or use it to rotate a knob or shape.
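If two fingers can be tracked individually, the rotation gesture is just the change in angle of the line joining them. A sketch (hypothetical, not part of the GML driver or the MERL program):

// Hypothetical two-finger rotation: the knob turns by the change in the angle
// of the line between the two finger positions.
class TwoFingerRotate {
    private double startAngle;

    void begin(double x1, double y1, double x2, double y2) {
        startAngle = Math.atan2(y2 - y1, x2 - x1);
    }

    // Returns the rotation (in radians) to apply to the knob or shape.
    double update(double x1, double y1, double x2, double y2) {
        return Math.atan2(y2 - y1, x2 - x1) - startAngle;
    }
}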

Anneli was determined to figure it out and kept me thinking about it while I should have gone off and got some admin work done.

I will test the Mac MERL program for a bit and possibly release it (I need to contact the GML people to see if I can release their library with it).