Informatik Handwerk
Peter Fargaš
Programmer :: Prototyping, Research
PHP | JavaScript | Java
Release date: August 2011

Interactive User Experiences

This document aims to be a first introductory text on creating interactive user experiences, and presents some of the minimal requirements from the viewpoint of system design.


As in most of my other work, and since I almost always work without outside inspiration, I do not back up my statements with citations or include links to other resources. I find both rather unnecessary: my reasoning is easy to follow, and such resources are easy to find via search engines by deducing suitable keywords from the text. For some of my reasoning I found affirmation in the book 'Mapping Scientific Frontiers' by Chaomei Chen (more precisely in chapter 1, sub-chapter 'Message in a Bottle'), and I took some inspiration from there for a clearer formulation of my thoughts.

Field pinpointing

Because of the broadness of the topic, I decided to limit the reasoning and explanations to 'traditional' user interfaces running on personal computers. Other areas in the user-interface category include mobile computing, consisting of two main segments, phones and tablets, which are beginning to intersect, and various specialized fields. Due to their different methods (e.g. hardware input), the concepts presented in this document may not carry over quite straightforwardly, though they should be transferable.

Another field is online user experiences; I will touch on it only briefly, without going into depth, since it possesses a special set of problems of its own.

This document can be read in two ways: the usual one, treating the user interface as a unification layer over all applications, but reading from the viewpoint of an application-specific user interface may also be of relevance to you.


Functionality of software is limited by various factors, some of which are:

  • the capabilities of the hardware platform it is designed for, or capable of running on
  • the algorithms, protocols and, generally, the design of the software
  • the user interface, or better said, the communication of inputs, commands, (partial) results, events and other turning points arising in and needed by the process, through the hardware input and output devices

This document will concentrate on the last limiter, which is the first key to discovering and enabling new functionality, as well as to optimizing our overall usage. The pressure to cope with increasing amounts of data, and with the bodily strain introduced by the requirement of precise and often very repetitive micro-movements, poses a hard challenge. It ranges from transforming raw data, by means of visualization methods, into the most readily human-understandable form, through editing free parameters with a real-time preview of the results, and communicating with co-workers and other departments, up to the design of the input and output hardware devices used. All this and more is what user interfaces consist of.

Current state of the field

From the conceptual side, visual user interfaces have evolved very little since the introduction of 'windowed' graphical user interfaces in the 1980s. Experimental interfaces using physical models are beginning to be developed, but none of these concepts has reached maturity, exhibits seamless adoption by a broad spectrum of users, or has shown a productivity increase significant enough to be adopted by developers of software for the broad public; sticky windows are an exception. It is also as if the set of elements of which user interfaces consist had been discovered in its fullness right from the start, with 'tabs' as one of the rare newer elements. The breakthrough to 3-dimensional models has not yet been achieved, and there seem to be far too few reasons for pursuing it: most of our work is 2-dimensional, and such a space appears to place increased demands on our cognitive capabilities due to its non-intuitiveness; we seem to work better with 'I know the way' than with 'I know where'. Presentations of extremely specialized user interfaces (in hardware, in software, as well as in the domain of usage) have been made, and although they seldom mention the design caveats present, they have shown some of the potentially reachable frontiers of our interaction with digital space.

There is not much to write about the input devices, keyboard and mouse. The mouse is slowly acquiring new functionality, but not even the mouse wheel can be said to be always present. Multimedia keyboards are likewise not widespread, and practically the only new function key almost always present, the 'Windows key', has not brought much. The tablet is used very seldom; its strengths are significant only for specialized tasks, while the typical user interface is harder to control with this device. Experimental setups of these classical devices, as well as experimental devices, are being developed and sold, but they seldom reach the needed versatility and ergonomics, which on top of everything presents a social barrier.

User interfaces are, due to our nature, visually oriented, and tuning them for the visually impaired (up to conversion into auditory space) is in my opinion problematic, although I have practically no information about such efforts.

Proposal for the direction of field development

The following are some of the key points, only roughly ordered; some could be seen as sub-categories of others, while others may augment a whole group of them.

Not only supporting the whole spectrum of users

One of the main points is not merely to cover the whole broad range of users, with their very different capabilities in using a computer, by supporting the minimum, but to give the more advanced users additional functionality and more effective usage. The introduction of keyboard shortcuts, as well as customizable and dockable elements, are examples of this, and the direction should be further supported. Introducing extensions, possibly isolated from accidental triggering to prevent confusing simpler users, and maximizing the number and variety of levels of interaction is in my opinion the right way to go.

Keeping the way forward free

Fast access to the functionality of working with the environment itself might be a crucial turning point for new, effective usages. When new hardware input methods are introduced, there is an urge to find all possible ways of using them and to assign natural standard functionality to each, the more the better, to make the technology attractive. This is, however, very short-sighted: at least one or two of these should be marked as 'unstable functionality intended for future purposes'. Otherwise, users who are not flexible in their usage, and the rolling wagon of software relying on the standard, might block extensions of the interfaces coupled with the hardware for good, never allowing the device to develop its full potential. Alternatively, ways of gracefully escaping such a lock-in should be considered beforehand. We are currently very limited in the spectrum of input devices, and this should be kept in mind.

Multiple user interfaces

Different applications might be more effective with, or even require, very different user interfaces. An example from the more common side: architects, designers and generally people creating and editing visual content make use of tablet hardware; automatically embedding the application in the mouse paradigm forces such users into ineffective usage, through the need to switch devices or through ineffective navigation. There is no reason why the visual space could not carry multiple user-interface engines running under a 'user interface operating system'. Sooner or later, decoupling the user interface from the underlying operating system will become necessary, and postponing this decision stalls the development of both, since they mostly come bundled. Online services, where the server offers functionality and the client side carries the visual environment, are an example of this paradigm and should be followed. The application level needs to mimic this as well, but programming teams in companies often already have a comparable split in team-member responsibilities.
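The idea of several user-interface engines living under one host could be sketched roughly as follows. All names here (UiEngine, UiHost, the device strings) are illustrative assumptions of mine, not an existing API; the point is only that routing an application's content to a device-appropriate engine is a small, well-defined responsibility.

```typescript
// Hypothetical sketch of a 'user interface operating system':
// a host that routes application content to whichever registered
// engine supports the input device in use.

interface UiEngine {
  name: string;
  supportsDevice(device: string): boolean;
  render(content: string): string;
}

class UiHost {
  private engines: UiEngine[] = [];

  register(engine: UiEngine): void {
    this.engines.push(engine);
  }

  // Route content to the first engine matching the device.
  present(content: string, device: string): string {
    const engine = this.engines.find(e => e.supportsDevice(device));
    if (!engine) throw new Error(`no engine for device: ${device}`);
    return engine.render(content);
  }
}

const host = new UiHost();
host.register({
  name: "mouse-windows",
  supportsDevice: d => d === "mouse",
  render: c => `[window] ${c}`,
});
host.register({
  name: "pen-canvas",
  supportsDevice: d => d === "tablet-pen",
  render: c => `[canvas] ${c}`,
});

console.log(host.present("floor plan", "tablet-pen")); // "[canvas] floor plan"
```

In such a setup, a tablet-centric engine serves the designer's application while a windowed engine serves everything else, with neither knowing about the other.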

User interface programming languages

We are missing them. In the standalone application development sector, the currently available 'visual programming tools' are very basic. Positioning elements, filling them with data (where the whole is underlain by standardized data structures) and binding them to input-device commands is mostly the limit of their capabilities. The dynamics of the environment largely has to be programmed by hand.
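The gap described above can be made concrete with a small sketch (every name in it is a hypothetical illustration): the parts such tools do cover, positioning, data fill and command binding, are trivially declarable, while the environment's dynamics ends up as a hand-written callback.

```typescript
// Illustrative only: what 'visual programming tools' typically declare.
type Widget = {
  id: string;
  x: number; y: number;              // positioning
  text: string;                      // data fill
  onCommand?: (cmd: string) => void; // input-device binding
};

const model = { title: "Report 2011" };

const label: Widget = {
  id: "title-label",
  x: 10, y: 10,
  text: model.title, // filled from a standardized data structure
};

// The environment's dynamics is not declarable in such tools
// and must be programmed by hand:
label.onCommand = cmd => {
  if (cmd === "highlight") label.text = `* ${label.text} *`;
};

label.onCommand!("highlight");
console.log(label.text); // "* Report 2011 *"
```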

In the online environment, apart from the extremely burdensome work with the most elemental 'cogs', the HTML/CSS/JavaScript programming stack has quite a few other problems. Starting with the concept itself, it does not present a clean split between content and presentation. Some problems from the practical side: JavaScript no longer scales to the tasks at hand, and browser-dependent implementations are necessary, but there is no 'clean' way to write them. In my opinion, being a web programmer is having the worst programming job there is. Every web-application company is developing its own framework, all solving the very same problem, over and over again.

What is needed is a categorization of elements and their representations, of possible user inputs and micro-commands, of processes coupled with virtual environments, of the structures arising, and of the transformations of all of the above from one user-interface paradigm into another. A general-purpose 'language' (or tool) embedding space- and representation-dynamics is needed.
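One way to read the categorization-and-transformation idea is sketched below. The role names and the auditory target are my own assumptions chosen for illustration: once elements carry an abstract role rather than a concrete widget type, a transformation into another paradigm (here, a linear spoken script) becomes a mechanical mapping.

```typescript
// Hypothetical categorization: elements carry an abstract role,
// independent of any concrete widget or paradigm.
type Role = "command" | "content" | "status";

interface AbstractElement {
  role: Role;
  label: string;
}

// One possible paradigm transformation: a visual screen
// rendered as an auditory, linear script.
function toAuditory(elements: AbstractElement[]): string[] {
  return elements.map(e => {
    switch (e.role) {
      case "command": return `Say to activate: ${e.label}`;
      case "content": return `Reading: ${e.label}`;
      case "status":  return `Note: ${e.label}`;
    }
  });
}

const screen: AbstractElement[] = [
  { role: "content", label: "Quarterly figures" },
  { role: "command", label: "Save" },
];

console.log(toAuditory(screen));
// [ 'Reading: Quarterly figures', 'Say to activate: Save' ]
```

The same categorized elements could equally be transformed into a pen-driven or keyboard-driven paradigm; the categorization, not the transformation, is the hard design problem.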

Users, designers, programmers

Programmers are very advanced types of users, and the cliché goes that they not only have tastes that are hard to understand though extremely optimized, but also problems estimating the tastes of others. Designers, on the other hand, are trained to have a very good grasp of the perception and needs of others. The design should be only loosely coupled to the application's functionality, and modifiable after the application is deployed, for the optimal effect in various scenarios. The docking mentioned before is an example of this paradigm. However, each application solves this problem on its own, and visual, non-programmer tools are missing. The 'design settings' should also be easily separable from the application's functionality settings, for sharing among users of similar preferences and backgrounds. Nevertheless, no single design fits all, and ideally users should be given the possibility to tweak the visual side of an application to their style of interaction. Tools of various expressive powers would be much needed in this area.
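The separability of 'design settings' from functionality settings amounts to a simple structural decision, sketched here with entirely hypothetical setting names: if the two halves never mix, either half can be shared or reverted on its own.

```typescript
// Illustrative split: design settings shareable independently of behavior.
interface DesignSettings {
  theme: string;
  fontSize: number;
  docked: string[];
}
interface FunctionalitySettings {
  autosaveMinutes: number;
  language: string;
}
interface Profile {
  design: DesignSettings;
  functionality: FunctionalitySettings;
}

const alice: Profile = {
  design: { theme: "dark", fontSize: 14, docked: ["palette", "history"] },
  functionality: { autosaveMinutes: 5, language: "en" },
};

const bob: Profile = {
  design: { theme: "light", fontSize: 12, docked: [] },
  functionality: { autosaveMinutes: 10, language: "de" },
};

// Adopting a colleague's design without touching one's own behavior settings:
function adoptDesign(target: Profile, shared: DesignSettings): Profile {
  return { ...target, design: { ...shared } };
}

const bobRestyled = adoptDesign(bob, alice.design);
console.log(bobRestyled.design.theme);           // "dark"
console.log(bobRestyled.functionality.language); // "de" (unchanged)
```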

The social aspect

The different environments users might be using present a barrier in communication and complicate the case of 'helping a colleague'. The environments should be structured maximally as a 'tree', with the root being the most basic usage and representation, and the branches carrying the more advanced ones. Keeping the root active, or better, falling back to the basics by very simple means, should be available at all times. Since the more advanced users already know all usages from the root up to their level, communication and teaching channels stay open. Sharing settings for parts of environments, as well as for single applications, would allow users to use pre-optimized environments, revert accidental changes, and comply with corporate standards.
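The tree structure described above can be sketched as follows; the level names are invented for illustration. Because every level keeps a reference toward the root, falling back to basics, and finding the shared vocabulary between two users, is just a walk up the tree.

```typescript
// Hypothetical environment-level tree: root = most basic usage,
// branches = progressively more advanced usages.
interface EnvLevel {
  name: string;
  parent?: EnvLevel;
}

const root: EnvLevel = { name: "basic" };
const shortcuts: EnvLevel = { name: "keyboard-shortcuts", parent: root };
const scripting: EnvLevel = { name: "scripting", parent: shortcuts };

// A user at any level knows every usage on the path to the root,
// so two users always share at least the root vocabulary.
function pathToRoot(level: EnvLevel): string[] {
  const path: string[] = [];
  for (let l: EnvLevel | undefined = level; l; l = l.parent) path.push(l.name);
  return path;
}

console.log(pathToRoot(scripting));
// [ 'scripting', 'keyboard-shortcuts', 'basic' ]
```

'Helping a colleague' then means descending to the deepest level both users' paths have in common, which is guaranteed to exist.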
