Supervising Some BTech Students for Their Capstone Projects (2012-13)

Posted on June 22, 2013



Author: Sanjay Goel,
http://in.linkedin.com/in/sgoel

_______________________________________

In my previous post, I gave a very brief summary of the capstone projects completed in 2012-13 by MTech students at JIIT under my supervision.

In this post, I am giving a very brief summary of some of the BTech projects completed by my project students during 2012-13 at JIIT.

1. Interactive Storytelling Environment for Children

By Arpit Agarwal and Hemika Narang: 

This project aims at developing a natural user interface, a set of tools that gives children a fun and interactive storytelling experience. We aim to improve children's storytelling experience by building on current research in tangible user interfaces. Our solution is an installation where the computing stays back-stage. We want to design an experience in which kids can learn, express their creativity, and have fun. Input is given with the hands and with physical icons placed on the table. This helps children explore physical spaces, connecting them to traditional activities rather than to purely digital computer interaction.

A demonstration video is available at http://www.youtube.com/watch?v=FDR8Q5jMkUk

2. Fiction Authoring Tool

By Aahna Tomar and Vaidik Kapoor:

This project is an application comprising a set of tools for book authors (specifically fiction authors) to help them collaboratively plan their stories, support their cognitive processes, and author the complete book. The tool will be a web service and will be mobile compatible. We observed that there aren't enough tools to help authors write and plan their books effectively, and the tools that do exist have no support for writing books collaboratively. Through this project, we have tried to explore the challenges in developing a tool for fiction authors who want to write both individually and collaboratively.

3. Interactive Virtualization in Real Books

By Himanshu Mangtani and Kaustubh Mukhopadhyay:

Our project aims at solving problems that people of every age group face, or may face, while interacting with real books. The problems are:

1. People cannot attach their research or related information (videos, texts, etc.) to a given page or picture of the book.

2. If something is written in another language, translating it into your native language is not possible with real books.

3. There is no instant dictionary; one has to open a dictionary or type the word into a mobile phone to get its meaning.

4. Important keywords can't be saved and managed, e.g., if somebody comes across a new English word and wants to save it for future reference.

5. Last and most important, there is currently no way to give presentations (e.g., slides or videos) using real books.

To solve these problems, we have used the concept of virtual reality. Using computer vision algorithms and Android mobile phones, we have demonstrated how the problems listed above can be solved. We are presenting a prototype of virtual books, using an Android phone.

A demonstration video is available at http://www.youtube.com/watch?v=6vpeujIKRnY
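The post does not specify which computer vision algorithms the prototype uses. As a rough, hedged illustration of one common approach for linking digital content to a physical page, the C++ sketch below matches ORB features (OpenCV 3.x+ API) between a stored reference image of a book page and a camera frame; the file names, thresholds, and overall flow are invented for the example.

```cpp
// Hedged sketch: recognizing a known book page in a camera frame with ORB
// feature matching (OpenCV 3.x+ C++ API). The project's actual algorithms
// are not specified in the post; everything here is illustrative.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Reference image of a book page and a live camera frame
    // (placeholder file names for illustration only).
    cv::Mat page  = cv::imread("page.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat frame = cv::imread("frame.jpg", cv::IMREAD_GRAYSCALE);
    if (page.empty() || frame.empty()) return 1;

    // Detect keypoints and compute binary descriptors on both images.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> kpPage, kpFrame;
    cv::Mat descPage, descFrame;
    orb->detectAndCompute(page,  cv::noArray(), kpPage,  descPage);
    orb->detectAndCompute(frame, cv::noArray(), kpFrame, descFrame);

    // Match descriptors with Hamming distance and count the close matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descPage, descFrame, matches);

    int good = 0;
    for (const cv::DMatch& m : matches)
        if (m.distance < 40) ++good;   // heuristic distance threshold

    // A high match count suggests the page is visible; the app could then
    // overlay translations, dictionary entries, or linked media on it.
    std::cout << "good matches: " << good << "\n";
    return (good > 25) ? 0 : 2;
}
```

On Android, code like this would typically run natively via the NDK, with the camera feed supplied by the platform rather than read from files.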

4. Water Simulation and Effects

By Rahul Shukla

The project aims at simulating visually realistic water in a 3D environment using a particle-based system. Since the model requires heavy computation, the CPU cannot be relied upon alone; to achieve a good frame rate, the computational power of both the CPU and the GPU must be exploited to the fullest.

Currently, the model consists of 500 particles that fall freely under gravity in a 3D environment. On contact with a cubic container, the particles spread around inside it and finally drain out through a small opening at its bottom. Lighting computations are achieved with cube mapping in GLSL.

The API used is OpenGL; collisions are currently handled by the collision detection algorithms of Bullet Physics; the scene is rendered using GLSL (the OpenGL Shading Language), which utilizes the shader units of the GPU for rendering primitives; and the free surface of the fluid is constructed using the Marching Cubes algorithm.
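As a rough illustration of the CPU side of such a particle system (not the project's actual code, which relies on Bullet Physics for collisions and GLSL for rendering), the sketch below integrates 500 particles under gravity with an explicit Euler step inside a unit-cube container and drains them through a small, invented opening at the bottom:

```cpp
// Hedged sketch of a CPU-side particle update loop: explicit Euler
// integration under gravity inside a unit-cube container with a small
// drain at the bottom. All constants and names are illustrative.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Particle {
    float x, y, z;      // position
    float vx, vy, vz;   // velocity
    bool  drained = false;
};

int main() {
    const float g = -9.8f, dt = 1.0f / 60.0f, restitution = 0.4f;
    std::vector<Particle> ps(500);
    for (Particle& p : ps) {            // drop particles from random spots
        p.x = rand() / float(RAND_MAX);
        p.y = 1.0f + rand() / float(RAND_MAX);
        p.z = rand() / float(RAND_MAX);
        p.vx = p.vy = p.vz = 0.0f;
    }

    for (int step = 0; step < 600; ++step) {      // ~10 seconds of motion
        for (Particle& p : ps) {
            if (p.drained) continue;
            p.vy += g * dt;                        // gravity
            p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt;

            // Bounce off the container walls (unit cube), losing energy.
            if (p.x < 0 || p.x > 1) { p.vx = -restitution * p.vx; p.x = p.x < 0 ? 0 : 1; }
            if (p.z < 0 || p.z > 1) { p.vz = -restitution * p.vz; p.z = p.z < 0 ? 0 : 1; }
            if (p.y < 0) {
                // Particles over a small central opening drain out.
                bool overHole = p.x > 0.45f && p.x < 0.55f &&
                                p.z > 0.45f && p.z < 0.55f;
                if (overHole) p.drained = true;
                else { p.vy = -restitution * p.vy; p.y = 0; }
            }
        }
    }
    int left = 0;
    for (const Particle& p : ps) if (!p.drained) ++left;
    std::printf("particles remaining: %d\n", left);
}
```

In the real system, a physics engine such as Bullet replaces these hand-rolled collision checks, and the GPU takes over surface reconstruction and shading.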

5. Human Action Recognition

By Poojit Sharma and Udit Gupta

The goal is to design a real-time system for sports video analysts that automates accurate video tagging of the various shots in a table tennis game by recognizing human actions from RGB and depth images. The player's actions are categorized into classes such as front-hand shot, back-hand shot, right-hand shot, left-hand shot, serve, and idle. The data thus gathered can be indexed for later analysis of a player based on the kinds of shots he or she plays. To capture RGB, depth, and skeleton data, we are using the Microsoft Kinect™ camera with the Microsoft Kinect™ SDK on Windows 7. To achieve real-time performance, we need to use the C++ NUI API of the SDK, which is faster than the managed C# .NET API. To analyse the colour stream, depth stream, and skeleton data, we are using the OpenCV library, a library of programming functions aimed mainly at real-time computer vision; it is free for use under the open-source BSD license. To learn from the data, we are using LIBSVM, an integrated software package for support vector classification that also supports multi-class classification.
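As a hedged sketch of what capturing skeleton data through the C++ NUI API looks like (based on the Kinect for Windows SDK v1; the project's actual setup may differ):

```cpp
// Hedged sketch: polling one skeleton frame via the Kinect for Windows
// SDK v1 C++ NUI API. Error handling is minimal, and the exact setup in
// the project may differ from this.
#include <Windows.h>
#include <NuiApi.h>
#include <cstdio>

int main() {
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
        return 1;
    NuiSkeletonTrackingEnable(nullptr, 0);           // default options

    NUI_SKELETON_FRAME frame = {0};
    if (SUCCEEDED(NuiSkeletonGetNextFrame(100, &frame))) {  // wait <= 100 ms
        NuiTransformSmooth(&frame, nullptr);         // default smoothing
        for (int i = 0; i < NUI_SKELETON_COUNT; ++i) {
            const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
            if (s.eTrackingState != NUI_SKELETON_TRACKED) continue;
            // Joint positions (metres, camera space) would feed the
            // feature extraction described below.
            Vector4 hand = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
            std::printf("right hand: %.2f %.2f %.2f\n", hand.x, hand.y, hand.z);
        }
    }
    NuiShutdown();
    return 0;
}
```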

The first step towards the solution is data collection. We have gathered depth, RGB, and skeleton data of players playing table tennis while performing different shots, such as the serve, the right-hand shot, and standing idle. The data covers 5 different players. Since we are performing supervised learning, we have manually broken the video into segments labelled with the kind of shot being played. Next, we pre-process the recorded data, reducing noise and filling in missing skeleton data. Noise reduction plays a very important part in feature extraction, as unwanted information can ruin the accuracy of the system. One video file is one example, with its frames providing the features. For each frame we calculate the average difference image of the depth frames and use the skeleton joint data. After generating the feature vectors, we train on the data using Support Vector Machines. A similar approach is followed for classifying new video segments.
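A hedged sketch of the final classification step using LIBSVM's C API; the feature layout, class labels, and SVM parameters below are invented for illustration:

```cpp
// Hedged sketch: training a multi-class SVM with LIBSVM's C API on
// precomputed per-video feature vectors. Feature dimensions, labels, and
// parameters are illustrative, not the project's actual settings.
#include "svm.h"      // LIBSVM header
#include <cstdio>
#include <vector>

int main() {
    const int numExamples = 4, dim = 3;
    // Toy feature vectors (e.g., pooled depth-difference and joint stats)
    // with class labels: 0 = serve, 1 = right-hand shot, 2 = idle.
    double feats[numExamples][dim] = {
        {0.9, 0.1, 0.3}, {0.2, 0.8, 0.5}, {0.1, 0.1, 0.1}, {0.8, 0.2, 0.4}};
    double labels[numExamples] = {0, 1, 2, 0};

    // LIBSVM stores each example as an array of svm_node, terminated by
    // a node with index -1; feature indices are 1-based.
    std::vector<std::vector<svm_node>> nodes(numExamples);
    std::vector<svm_node*> rows(numExamples);
    for (int i = 0; i < numExamples; ++i) {
        for (int j = 0; j < dim; ++j)
            nodes[i].push_back({j + 1, feats[i][j]});
        nodes[i].push_back({-1, 0.0});               // terminator
        rows[i] = nodes[i].data();
    }

    svm_problem prob = {};
    prob.l = numExamples;
    prob.y = labels;
    prob.x = rows.data();

    svm_parameter param = {};
    param.svm_type = C_SVC;       // multi-class C-support classification
    param.kernel_type = RBF;
    param.gamma = 1.0 / dim;
    param.C = 1.0;
    param.cache_size = 100;
    param.eps = 1e-3;

    if (const char* err = svm_check_parameter(&prob, &param)) {
        std::fprintf(stderr, "bad params: %s\n", err);
        return 1;
    }
    svm_model* model = svm_train(&prob, &param);

    // Classify one unseen segment's feature vector.
    svm_node query[] = {{1, 0.85}, {2, 0.15}, {3, 0.35}, {-1, 0.0}};
    std::printf("predicted class: %.0f\n", svm_predict(model, query));

    svm_free_and_destroy_model(&model);
    return 0;
}
```

LIBSVM handles the multi-class case internally (one-against-one), so the same call covers all of the shot classes listed above.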
