PERCEPTUAL COMPUTING
We’ve all seen at least one scene from the Iron Man films; we’ve all seen Tony Stark manipulate holographic projections with a detail and richness we can only dream of. We have seen him interact with computers using those very images. The ease with which he simulates the construction of a battle-suit, of buildings, of entire processes…it is all a treat to watch and marvel at.
But our minds automatically dismiss it as fiction, as something not achievable in this generation or the next five. We think of it as something well beyond our current technological capabilities.
But is it, really?
As things stand, we may be well on our way to seeing something like this become reality. It is made possible by a technology…no, a concept: one that has caught on in recent years and is increasingly being turned into technology we can actually see and use. It is the concept of Perceptual Computing.
The words may sound fancy…I mean, “Perceptual Computing”! But really, the concept itself is simple even if its implementation is far from it. The words themselves reveal the meaning. Computation, as we know, is the accomplishment of tasks through the use of computers.
What does perception have to do with it, though? To answer that, we first need to consider how we actually use the computers that so dominate our lives and society.
We do so primarily through keyboards, touch interfaces and other such devices. And yes, these devices have served our purposes well and will continue to do so for many a decade. But we humans…it is in our very nature to push ourselves and exceed our past accomplishments in every possible way.
So this is what we did: we enormously increased the capabilities of our processors, our memory and our data-transfer infrastructure. All this certainly helped, as we can see from the massive effect our computers have had…but it also raised another question in the minds of developers.
What do people do with all this computing power at their fingertips? With all this technology?
Indeed, it seemed like the computational resources in our hands were being under-utilized! Thus we came up with a new way to interface with these computers…using our very perceptions.
Let us be a little clearer: we perceive the world through our actions upon it and its reactions to us. The same holds for Perceptual Computing.
We interact with computing applications through gestures, eye movements and maybe, one day, even our brain signals. Like dear old Tony, we have the capability to…well, not move holographic projections around just yet…but we can certainly manipulate everyday applications with our gestures.
Devices will be able to perceive our actions through new capabilities including close-range hand gestures, finger articulation, speech recognition, face tracking, augmented reality experiences, and more.
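To make this a little more concrete, here is a small sketch in Python of how an application might react to such perceptual input. Every class and function name here is invented purely for illustration; this is not the API of any real SDK, just the general shape of the idea.

from dataclasses import dataclass

@dataclass
class PerceptualEvent:
    kind: str    # e.g. "gesture", "speech" or "face"
    value: str   # e.g. "swipe_left", "open browser", "looking_away"

def handle(event: PerceptualEvent) -> None:
    # Translate a perceived event into an ordinary application action.
    if event.kind == "gesture" and event.value == "swipe_left":
        print("Going back to the previous page")
    elif event.kind == "speech":
        print("Running voice command:", event.value)
    elif event.kind == "face" and event.value == "looking_away":
        print("Pausing the video")

# A real device driver would stream these events continuously;
# here we simply fake a few of them.
for e in [PerceptualEvent("gesture", "swipe_left"),
          PerceptualEvent("speech", "open browser"),
          PerceptualEvent("face", "looking_away")]:
    handle(e)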
We could wax eloquent about the wonders of this new computing method…but nothing describes a new technology like its architecture. So…here we go.
ARCHITECTURE
Yes, I pulled this from Wikipedia. But before you shout “MORON!!”…this is actually the best place to start technically.
This looks like just the kind of nonsense nerds try to pull on us. But do not despair…the situation is not lost. Let us try to build an understandable picture of this “architecture of the Perceptual Computer”.
We have an application A, which enables us to do stuff on a computer. Anything. This application has a list of things that can be done with it, and things that can’t.
We group the ‘can-do’ things into the ‘vocabulary’ of the application: a list describing all the possible actions in that application.
Okay, we have a Vocabulary. Next, for each word in that Vocabulary, we collect use-cases. We aggregate usage data under each ‘word’…this is essentially a record of how each action is carried out in the system.
Skipping over a lot of jargon, let us simply say that these words and actions are compiled into a codebook for the application, as the toy example below suggests.
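Here is what a vocabulary and codebook might look like in Python for an imaginary photo-viewing application. The words and action steps are made up for this illustration only.

# The vocabulary: everything the imaginary application can do.
vocabulary = ["open", "zoom_in", "rotate", "share"]

# The codebook: for each word, the recorded sequence of steps that
# implements that action in the system.
codebook = {
    "open":    ["show_file_picker", "load_image", "render"],
    "zoom_in": ["scale_view(1.25)", "render"],
    "rotate":  ["rotate_view(90)", "render"],
    "share":   ["export_jpeg", "open_share_dialog"],
}

The encoder we meet next has the job of mapping whatever the user is perceived to be doing onto one of these vocabulary words.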
So, what we have here is an encoder that assigns a word from the vocabulary of application A to an action or sequence of actions. Once this is done, a Computing with Words (CWW) engine operates on these words, and its output is then decoded for the computing system.
The decoding involves constraining the CWW output so that it resembles our Vocabulary, and then partitioning it into fuzzy sets.
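To show the overall shape of this encoder, CWW engine and decoder chain, here is a deliberately tiny sketch in Python. It uses simple triangular fuzzy sets on a 0–10 ‘intensity’ scale; real perceptual computers use far richer fuzzy-set models, and every word and number below is invented for this illustration.

def triangle(a, b, c):
    # Membership function of a triangular fuzzy set peaking at b.
    def mu(x):
        if a < x <= b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 1.0 if x == b else 0.0
    return mu

# Encoder: each vocabulary word is modelled as a fuzzy set.
fuzzy_vocab = {
    "slightly":   triangle(0, 2, 4),
    "moderately": triangle(3, 5, 7),
    "strongly":   triangle(6, 8, 10),
}

def cww_engine(words):
    # Toy engine: average the centroids of the input words' fuzzy sets.
    xs = range(11)
    centroids = []
    for w in words:
        mu = fuzzy_vocab[w]
        weights = [mu(x) for x in xs]
        centroids.append(sum(x * m for x, m in zip(xs, weights)) / sum(weights))
    return sum(centroids) / len(centroids)

def decoder(value):
    # Map the engine's numeric output back to the best-matching word.
    return max(fuzzy_vocab, key=lambda w: fuzzy_vocab[w](value))

print(decoder(cww_engine(["slightly", "moderately", "moderately"])))  # -> "moderately"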
-----------------------------------------------------------
Today, people are doing wonderful things with this concept. Humans are achieving a level of interaction with computers that was previously undreamed of.
Intel is doing pioneering work on this technology, and has even released the Intel Perceptual Computing SDK. It is accessible to ordinary developers, and the world has at its fingertips a beautiful tool. If we want to, we can eliminate the need for keyboards with advanced forms of human-computer interaction.
So, fellow developers…let us involve ourselves in this exciting technology. Let us be part of something that can break down the barriers of literacy and education.
Let us compute perceptually.
Article by,
Chetan.S.Kumar,
Bangalore.