Call for papers NOW open

I managed to publish (I'm still recovering from the flu that hit me this week) the call for papers for the 15-minute short talk slots at The Interaction Frontiers 2006.

If you're a bright-minded soul who has something to say about

  • Software able to understand users' moods
  • Adaptive interfaces
  • Ubiquitous computing
  • Robots and automata
  • Natural interfaces
  • Augmented interfaces

Send an email containing your position paper as a PDF (no more than 4 pages) to interactionfrontiers@gmail.com before May 25; the four best submissions will get to present their projects in front of the TIF06 audience.

Ubiquities

Today we officially started a Flickr group to showcase Simone's technique for generating the ubiquitous presence of the same person in a single shot; we called it Ubiquities.

Three mes at work

The shooting technique comes directly from the Transparent Screen technique, but it's applied over and over to obtain multiple occurrences of the same subject in different positions. As Simone says, it's easy to apply indoors, since nothing in the background changes. We should experiment with it outdoors.
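
For the digitally inclined, here is a rough sketch of how one could automate the compositing step; it's purely my own illustration (Simone's tutorial presumably works by hand with layers and masks), and the file names and threshold are made-up assumptions. The idea: shoot one empty plate plus several poses from a tripod, then paste in every pixel that differs from the empty scene.

    import numpy as np
    from PIL import Image

    def load(path):
        # Load a shot as signed integers so pixel differences don't wrap around.
        return np.asarray(Image.open(path).convert("RGB")).astype(np.int16)

    background = load("empty.jpg")   # hypothetical shot of the empty scene
    result = background.copy()

    for path in ["pose1.jpg", "pose2.jpg", "pose3.jpg"]:  # hypothetical file names
        shot = load(path)
        # Pixels that differ enough from the empty plate belong to the subject.
        diff = np.abs(shot - background).sum(axis=2)
        mask = diff > 60  # arbitrary threshold; tune for your lighting
        result[mask] = shot[mask]

    Image.fromarray(result.astype(np.uint8)).save("ubiquity.jpg")

With a tripod-fixed camera and unchanged indoor lighting the subtraction is clean, which is exactly why Simone says the technique is easier indoors.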

It was real fun posing for a multiple me-while-working shot, and even more fun posing for a multi-person shot with both Paolo and Simone: we used Post-its on the floor to mark the feet positions and to test lights and shadows. I'll try the same technique at home to portray baby Francesca.

If you're interested, you'll find a brief tutorial here and the group photo pool here. The group is public, join in!

Monday morning at the IDII

As previously anticipated, I spent a nice morning at the Interaction Design Institute Ivrea in Milan listening to a couple of thesis projects (now approaching 50% completion). I was invited to Vinay Venkatraman's presentation: he's a bright-minded Indian guy who came up with a nifty prototype of a new way for visually impaired users to interact with web content.

The main idea is that current screen readers (Flash Voice excluded, I dare say :-) are strictly linear: they scan the page top to bottom and transform it into synthesized voice, full stop. We (and by "we" I mean people who can see) instead usually interact with web content in non-linear ways. So Vinay came up with a solution that translates web page elements into different sounds: a TNICK for a form, a PLICK for a paragraph, a TRSTCH for a link, and so on; everything is controlled via a motion-feedback-enabled roller, connected to the computer via USB and manipulated by the user.
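
To make the mapping concrete, here's a minimal sketch of the tag-to-earcon idea as I understood it, assuming a simple walk over the HTML. The sound names come from Vinay's demo; the parser and the play_earcon stub are my own illustration, not his implementation.

    # Sketch only: walk the HTML and announce a sound cue per element type.
    from html.parser import HTMLParser

    # Earcon names from Vinay's demo; the mapping itself is my assumption.
    EARCONS = {
        "form": "TNICK",
        "p": "PLICK",
        "a": "TRSTCH",
    }

    def play_earcon(name):
        print(f"*{name}*")  # placeholder for real audio output

    class EarconWalker(HTMLParser):
        def handle_starttag(self, tag, attrs):
            cue = EARCONS.get(tag)
            if cue:
                play_earcon(cue)

    walker = EarconWalker()
    walker.feed('<p>Hello <a href="#">world</a></p><form></form>')
    # prints *PLICK*, then *TRSTCH*, then *TNICK*, in document order

The point is that the user hears the structure of the page at a glance, so to speak, instead of waiting for a linear top-to-bottom reading.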

His prototype (which is nearly 70% wood) targets developing countries, so it aims to be EXTREMELY cheap to build, based on open source software and ready-to-build hardware kits. Vinay is probably betting that the $100 laptop will become a smashing hit in the next few years (after the speech I suggested he take the $20 cellphone into account too). I tried to test the prototype but (thank you, Murphy) the app crashed and wouldn't restart; I'm looking forward to retesting it soon.

My visit ended with a quick chat with Fabio and a pizza with JC (thank you for spreading the word on Flash Voice!), Phil Tabor, and Neil Churcher, with whom I had an enlightening discussion on the future of mobile television.