Interview with Dr. Kiran K Ravilakollu, Assistant Professor, Department of Computer Science, School of Engineering and Technology, Sharda University

Please tell us briefly about yourself.

I am a young, dynamic and confident person who believes in the credo “if you can dream something big and are confident enough to achieve it, your motivation will drive you towards the target”. My motivation has helped me reach new heights of knowledge along with a professional and structured personal life, with style and progress, and with my head always held high.

Tell us something about the Center for Hybrid Intelligent Systems.

The ‘Center for Hybrid Intelligent Systems’ is a research group established at the University of Sunderland, UK, investigating vision-, navigation- and emotion-based aspects of hybrid systems. Prof. Stefan Wermter is the key person behind this center. As a research fellow in this group, I made my contribution in the area of neural robotics, specifically multimodal integration based on the Superior Colliculus.

What projects are you working on?

1. Ambient Intelligence for smart acoustic localization and control

2. An edited book, “Ambient Intelligence: Role of Computational Intelligence”, with Springer

3. Sensor network for waste-bin analysis and routing for collection and disposal

4. Search engine optimization using novel ranking methodologies

5. Investigation of service optimization in a cloud computing environment (nearing completion)

With which international universities are you working?

My work is carried out in association with two of my colleagues based at Imperial College London, UK, and Machine Intelligence Labs, USA. I also work with Prof. Kevin Burn from the University of Sunderland, UK, on robotics design and development.

According to you, which university is best for research in your field?

These are thrust areas of research, concentrating on stabilizing modern concepts for the upcoming issues and needs of tomorrow. It is unfortunate that I am not able to quote any IITs. The Fluid media labs of MIT (http://fluid.media.mit.edu) are doing excellent research in the Ambient Intelligence domain. As far as autonomous agents and robotics are concerned, the Knowledge Technology Group at the University of Hamburg, Germany, under the leadership of Prof. Stefan Wermter, is doing excellent work, along with the AI labs of MIT, USA. Much more excellent research and application development is being carried out across the world.

What is Artificial Intelligence?

“An external influence provided to a computational device/machine, through which a given task can be completed efficiently at a level greater than or equal to the calibre of natural intelligence” can be considered artificial intelligence. This influence can be provided in the form of methods, analysis, learning, training and decision-making aspects. “Any concept that can nourish, design and develop a methodology through which a computational device can be constructed that acts in the closest possible connection with a fellow human being” can be called an artificially intelligent agent (robot).

What is a Saccade?

At any instant of time, the human eye can isolate/see only a specific point in the visual environment. While navigating across the visual frame, as the need arises, the focus of visual attention drifts from point to point. This rapid movement of the eye from one point to another is called a saccade.

What is Multimodal Integration?

The input stimuli through which sound, sight, smell, touch and taste/movement are sensed by a human being are considered the various modalities. These are also called the primary senses, or “Pancha-indriya” in Sanskrit. When the human brain processes stimuli in daily life, many cases arise where more than a single modality has to be considered in order to make an efficient decision. In such instances, for reasons such as too many stimuli arriving at a single instant of time, inadequate information, incomplete stimuli, or the absence of an expected stimulus, the brain takes assistance from an associated/available modality to make an efficient decision. When we examine what is happening inside, information from multiple modalities is integrated to make the final decision call; this is therefore called multimodal integration, or multisensory integration.
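To give a concrete (and deliberately simplified) flavour of what such integration can look like computationally, the Python sketch below fuses a noisy auditory location estimate with a more reliable visual one using reliability-weighted (maximum-likelihood) cue combination. It is a generic textbook formulation with assumed numbers, not the specific model used in the interviewee's research.

    def fuse_estimates(mu_a, var_a, mu_v, var_v):
        """Reliability-weighted fusion of an auditory and a visual estimate
        of a target's azimuth (degrees). Each cue is weighted by its
        reliability (inverse variance), as in standard maximum-likelihood
        cue-combination models."""
        w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
        w_v = 1.0 - w_a
        mu = w_a * mu_a + w_v * mu_v              # fused location estimate
        var = 1.0 / (1.0 / var_a + 1.0 / var_v)   # fused (smaller) uncertainty
        return mu, var

    # Example: vision reports 10 deg (precise), audio reports 25 deg (noisy).
    loc, unc = fuse_estimates(mu_a=25.0, var_a=16.0, mu_v=10.0, var_v=4.0)
    print(f"fused azimuth = {loc:.1f} deg, variance = {unc:.1f}")   # 13.0 deg, 3.2

Notice that the fused estimate lies closer to the more reliable (visual) cue and has a smaller variance than either cue alone, which is the basic intuition behind multisensory integration.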

What is Superior Colliculus and what is its importance?

The Superior Colliculus (SC) is an important region of the midbrain in the human brain system. This region is in direct connection with the eyes through the optic nerve and tract. Through this connection, the SC is able to control the movement of the eyes and thereby the movement of the head. In general, the SC is responsible for deciding what, or where exactly, to look, with the help of eye and head movements.

How does Multimodal Integration in the Superior Colliculus guide you in developing a robotics model?

Multimodal integration in the Superior Colliculus is carried out for audio and visual stimuli only, and through it the decision on target localization can be made effectively. Understanding the behaviour of the SC can help in the computational modelling of the multimodal criterion. This criterion is, in turn, part and parcel of an intelligent agent that is intended to behave like a human being. The SC experiment is one example through which certain critical behaviours can be studied and transformed into a computational model that can be replicated on robots. For such computational models to succeed, it is essential to bring together the biological, neuroscience, computational, artificial intelligence and robotics areas of science and technology.
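As a hedged illustration of the “replicated on robots” step, the short Python sketch below maps an integrated audio-visual direction estimate onto a saccade-like pan/tilt orienting command. The joint limits and the pan-tilt interface are assumptions made for illustration, not parameters of the interviewee's robot.

    # Hypothetical sketch: turning an integrated audio-visual direction
    # estimate into a saccade-like orienting command for a pan-tilt camera
    # head. The joint limits below are illustrative assumptions.

    PAN_LIMIT_DEG = 90.0    # assumed mechanical limit of the pan joint
    TILT_LIMIT_DEG = 30.0   # assumed mechanical limit of the tilt joint

    def clamp(value, limit):
        return max(-limit, min(limit, value))

    def orient_to_target(azimuth_deg, elevation_deg):
        """Map a fused target direction (degrees, relative to the current
        gaze) to clipped pan/tilt commands for the robot head."""
        return {
            "pan_deg": clamp(azimuth_deg, PAN_LIMIT_DEG),
            "tilt_deg": clamp(elevation_deg, TILT_LIMIT_DEG),
        }

    # Example: the fused estimate places the target 13 deg to the right and
    # 5 deg above the current line of sight, so the head turns accordingly.
    print(orient_to_target(13.0, 5.0))   # {'pan_deg': 13.0, 'tilt_deg': 5.0}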

What are enhancement and depression?

The enhancement and depression phenomena are introduced when modelling the behaviour of the multimodal integrated output for localizing the source of audio and visual stimuli. Enhancement is an increase in the strength of the integrated response beyond either of the given audio or visual stimuli on its own (as the influence of one on the other increases), confirming the localization point. Similarly, depression is a suppression of the strength of the integrated response below either of the given stimuli, which can fall off exponentially (as the influence of one on the other decreases). These two phenomena are critical in characterizing the multimodal integrated output. Barry E. Stein and M. Alex Meredith verified these phenomena from a neuroscience point of view.
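To make the two phenomena concrete, the Python sketch below computes the multisensory interaction index commonly used in the literature following Stein and Meredith: the percentage change of the combined response relative to the strongest single-modality response. The response values are illustrative assumptions, not data from the interviewee's experiments.

    def interaction_index(audio_resp, visual_resp, combined_resp):
        """Percentage change of the combined (multisensory) response
        relative to the best single-modality response:

            index = 100 * (combined - max(audio, visual)) / max(audio, visual)

        Positive values indicate enhancement, negative values depression."""
        best_unimodal = max(audio_resp, visual_resp)
        return 100.0 * (combined_resp - best_unimodal) / best_unimodal

    print(interaction_index(8.0, 10.0, 18.0))   # +80.0  -> enhancement
    print(interaction_index(8.0, 10.0, 6.0))    # -40.0  -> depression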

How do you apply auditory and visual processing in artificial intelligence?

Intelligence can be defined as observing the environment and then acting on it. When it comes to interacting with the environment, the eyes and ears are the primary sensors used for obtaining information. For robots, the corresponding means of interaction are cameras and microphones. In this context, in order to understand the environment and to act on it, it is essential to study and apply auditory and visual processing methods to make a machine/robot artificially intelligent.
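As one small, hypothetical example of auditory processing on a robot, the Python sketch below estimates the direction of a sound source from the time difference between two microphone signals, found via cross-correlation. The microphone spacing, sample rate and test signal are assumed values, not details of the interviewee's setup.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s
    MIC_SPACING = 0.2        # m between the two microphones (assumed)
    SAMPLE_RATE = 16000      # Hz (assumed)

    def estimate_azimuth(left, right):
        """Return the estimated azimuth in degrees (negative = source to the
        left of centre, positive = to the right)."""
        corr = np.correlate(left, right, mode="full")
        # Peak position gives the lag in samples; it is negative when the
        # sound reaches the right microphone later than the left one.
        lag = np.argmax(corr) - (len(right) - 1)
        itd = lag / SAMPLE_RATE                     # time difference in seconds
        sin_theta = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
        return float(np.degrees(np.arcsin(sin_theta)))

    # Example: a 500 Hz tone reaches the right microphone 5 samples later
    # than the left one, i.e. the source is off to the left.
    t = np.arange(0, 0.02, 1.0 / SAMPLE_RATE)
    tone = np.sin(2 * np.pi * 500 * t)
    delayed = np.concatenate([np.zeros(5), tone[:-5]])
    print(estimate_azimuth(tone, delayed))   # roughly -32 degrees

A visual counterpart would locate the target in the camera image; the two estimates can then be fused as in the multimodal integration sketch above.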

Explain the merging of senses and cognitive neuroscience.

The merging of senses deals with how the various stimuli received at their respective sensors are integrated to generate a conclusive command decision. This merging can include hearing, vision, touch, smell and taste, along with the associated gestures. Merging of senses thus means integration of sensory data; in the computational domain the concept is applied to all of the sensors available for measuring different parameters. “Cognitive” refers to sensory information processing, including usage, fitting, modelling, transformation, abstraction and encapsulation with valid analytical reasoning. Cognitive neuroscience deals with the study of neuroscience, or brain science, and the transformation of those principles into the real world with the help of cognitive models. There is a fine line between the two: cognitive neuroscience should be able to justify its methodology from a biological/neuroscience point of view, while this is not mandatory for the other.

Explain “it’s a game of signals and their timings”.

Starting from the Superior Colliculus through to multimodal integration, this research domain has to work with a large number of stimuli, or input signals. Depending on the functionality or application, different methods or techniques are selected to work with these signals. Similarly, depending on the constraints and the influence of time on a signal, the time factor is considered when defining the specifications around that signal. On the basis of these two, this research is all about a “game of signals and their timings”.