Supervising Some Senior BTech Students for Their Capstone Projects (2011-12)

Posted on June 21, 2012



Author:  Sanjay Goel, http://in.linkedin.com/in/sgoel

_______________________________________

Since the beginning of my teaching career in 1988, I have always placed a lot of emphasis on project work. Wherever I got an opportunity, I initiated and supported efforts to increase the share of project work in courses, curricula, and assessment schemes. In JIIT’s BTech curriculum, we have created many slots for project work. Out of a total requirement of 195 credits for the BTech degree, 30 credits are reserved for major and minor projects in the third and fourth years. Further, in the CSE department at JIIT, students are normally expected to do a group project in almost all CSE/IT courses. For example, the previous post, My Courses – V: An Overview of a Course on Image Processing (Jan-May 2012), also includes a brief overview of some projects completed by the students in my course on Image Processing.

Given the opportunity, some motivation, and a little guidance, it is a great pleasure to see how some youngsters can dare to work on unconventional problems, aim high, and work hard not only to meet but also to exceed expectations. However, it is unfortunate that there are very limited appropriate job opportunities in the Indian IT industry to properly leverage the real talent of such fresh engineers after their graduation.

I feel that the most important role of the faculty supervisor is to motivate students to set high goals and to help them discover and define a project problem that matches their tastes, hobbies, and aspirations. I avoid asking students to work on readymade problems.

In this post I am giving a very brief summary of some of the BTech projects completed by my project students during the year 2011-12 at JIIT. Interestingly, some of them had so far shown mediocre or even poor performance in traditional forms of academic engagement and examinations.

1.   Prototype Vehicle for the Physically Challenged

      by Abhishek Singh and Siddhant Singh:    The project is to build a prototype vehicle for physically challenged individuals, more specifically wheelchair-bound individuals, for whom commuting is no small challenge and getting on or off a vehicle is far more challenging still. This project aims to reduce this problem by designing and building a vehicle that incorporates the wheelchair within the vehicle, so that the wheelchair-bound individual never has to leave the wheelchair. Existing systems that allow a wheelchair to be kept in a vehicle either require the driver to get off the wheelchair or require the individual to be a passenger. The vehicle allows the user to operate the essential vehicle controls – steering, accelerator, and brake – through a simple joystick interface. Moreover, to assist the driver, a lane detection module identifies the lane the vehicle is in and can be used to guide or warn the driver in case of sudden and abrupt lane departures.

The work was divided into two phases:

1)      Joystick control of the vehicle – involves taking input from the user through a joystick and processing it to produce servo commands delivered through the Arduino board.

2)      Autonomous driving in a constrained environment – uses input from a Kinect sensor to obtain a depth image and processes the frames in real time, combining depth-based blob detection with vision-based blob tracking for robust obstacle detection (a minimal sketch of the blob detection step follows).
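The post does not include the detection code, so the following is only a minimal sketch of the depth-based blob detection idea, assuming the Kinect depth frame is already available as a 16-bit NumPy array (for example through libfreenect's Python bindings); the depth band, kernel size, and area threshold are illustrative values, not the project's.

import cv2
import numpy as np

def detect_obstacles(depth_mm, near_mm=500, far_mm=2000, min_area=800):
    """Return bounding boxes (x, y, w, h) of obstacle blobs in a Kinect depth frame.

    depth_mm is a single-channel 16-bit depth image in millimetres; pixels whose
    depth falls inside [near_mm, far_mm] are treated as potential obstacles.
    """
    # Binary mask of pixels inside the depth band of interest.
    mask = ((depth_mm > near_mm) & (depth_mm < far_mm)).astype(np.uint8) * 255
    # Remove the speckle noise typical of the Kinect depth stream.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Blob detection via external contours of the cleaned mask
    # (the [-2] index works with both the OpenCV 3 and OpenCV 4 return signatures).
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

In the project, such depth blobs are combined with vision-based blob tracking before any braking or steering decision is issued.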

The system consists of a PC/laptop for processing the inputs from the joystick. Python has been used for reading inputs from the joystick and then sending the appropriate signals to the Arduino Uno board connected to the laptop; Python was chosen as it offers a rich library of joystick control functions. The Arduino board then instructs the servos through commands generated by the code burnt onto the board. These servos, through H-bridges and a feedback mechanism, control the motors for throttle, steering, and brake.
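As a rough illustration of that pipeline, here is a minimal sketch of the joystick-to-Arduino path, assuming pygame for the joystick and pyserial for the link; the serial port, baud rate, and the one-line text protocol are my own placeholders, not the project's actual code.

import pygame
import serial

# Assumed serial link to the Arduino Uno; the port name and baud rate are placeholders.
arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=0.1)

pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)
joystick.init()

def axis_to_servo(value):
    """Map a joystick axis reading in [-1, 1] to a servo angle in [0, 180]."""
    return int((value + 1.0) * 90)

while True:
    pygame.event.pump()                              # refresh the joystick state
    steer = axis_to_servo(joystick.get_axis(0))      # left/right axis -> steering servo
    throttle = axis_to_servo(joystick.get_axis(1))   # up/down axis -> throttle servo
    # Illustrative one-line text protocol; the sketch burnt on the Arduino
    # would parse each line and drive the corresponding servos.
    arduino.write('S{:03d}T{:03d}\n'.format(steer, throttle).encode())
    pygame.time.wait(50)                             # roughly a 20 Hz command rate

A matching Arduino sketch would parse each line and update the steering and throttle servos accordingly.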

A failsafe mechanism to transfer control to a human driver, in case of a fault in the processing or execution of the algorithm, has also been developed. It involves a wireless remote with which a human operator can switch the vehicle to manual mode and drive or stop it as required from outside the vehicle.

Limitations of Solution

  • The vehicle can operate only in indoor environments, as the range of the Kinect sensor drops drastically in the presence of sunlight.
  • There is a time lag between the detection of obstacles and the execution of motor actions due to hardware constraints.

Click to see some photographs of the prototype vehicle

2.   Orchestable

by Ankur Agrawal:  “Music Orchestra on a Multi-touch Table” is a project aiming to develop a tool for musicians so that they can play musical instruments digitally. The application architecture is framed to give users the flexibility of using multiple instruments simultaneously, with every instrument responding to the user's touch gestures and events concurrently. With this architecture, more than one user can play the instruments at the same time. Since the technology is implemented using a camera, it can recognize any number of touches on the touch screen (actually a normal transparent acrylic sheet).

The camera captures images of the screen from behind. The images are then processed by the image processing module, which extracts the touch blobs from the captured frames and passes them over a socket (usually port 3333) using the TUIO protocol, a universal protocol for transmitting multi-touch events. The TUIO events are then read by the client-side software, where they are processed to identify gestures and passed on to the respective instrument's interface, which produces the response (a minimal client sketch appears below).
The project's foremost interface requirement is to be intuitive and realistic, so as to give users the illusion of playing a real instrument. The instruments respond in real time and in synchronization with one another.
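Since TUIO rides on OSC over UDP (port 3333 by default), the receiving side can be sketched with the python-osc package; this is not the project's client, just a minimal illustration of listening for /tuio/2Dcur cursor messages, and the handler merely prints positions where the real client would map them to gestures and instrument events.

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_cursor(address, *args):
    """Handle /tuio/2Dcur messages ('alive', 'set' and 'fseq' commands)."""
    if args and args[0] == 'set':
        # A 'set' command carries: session_id, x, y, x_velocity, y_velocity, acceleration.
        session_id, x, y = args[1], args[2], args[3]
        print('cursor {}: ({:.3f}, {:.3f})'.format(session_id, x, y))
        # The project's client would turn these positions into gestures
        # and dispatch them to the active instrument's interface.

dispatcher = Dispatcher()
dispatcher.map('/tuio/2Dcur', on_cursor)

# TUIO trackers send their OSC bundles to UDP port 3333 by default.
server = BlockingOSCUDPServer(('127.0.0.1', 3333), dispatcher)
server.serve_forever()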

The inspiration for the orchestra project came from a simple application, “GarageBand”, made by Apple Inc. for the iPad. That application allows the user to select any of the provided instruments for playing; however, users are not allowed to play more than one instrument, or more than one instance of the same instrument, simultaneously. This project aims at giving the user an environment for playing musical instruments digitally, with a more realistic user interface and broader accessibility and flexibility than the existing tools.

Ankur demonstrated this project at  JED-i 2012 (http://jed-i.in/challenge/) and won 2nd prize in this All India Competition at IISc, Bangalore.

3.   Toolkit for Fabricization

        by Pranjul Sharma:  The project is about creating a tool that could help the fashion industry visualize fabrics before committing to the final output, rather than printing the cloth and then rejecting it, thereby reducing the waste of cloth. The tool has four modules:
1. Selecting a pattern from a database of various ethnic designs.
2. Changing the channels and colors of the selected pattern (a minimal sketch of this step follows the list).
3. Forming a stencil of the pattern, in a user-specified color combination, that can be kept and reused.
4. Selecting a fabric from a given list, which displays the output image of the selected pattern with the effect of that particular fabric.
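The tool itself runs on Java with Photoshop CS5, so the following is only an illustration in Python of the recoloring and fabric-overlay ideas behind modules 2 and 4, using Pillow; the tint values and file names are placeholders.

from PIL import Image, ImageOps

def recolor_pattern(pattern_path, tint=(200, 120, 50)):
    """Map the pattern's luminance onto a user-chosen tint color (module 2)."""
    gray = Image.open(pattern_path).convert('L')
    # Dark areas of the pattern stay black; light areas take on the tint color.
    return ImageOps.colorize(gray, black=(0, 0, 0), white=tint)

def apply_fabric(pattern, fabric_path, strength=0.35):
    """Blend the recolored pattern with a fabric texture to preview it (module 4)."""
    fabric = Image.open(fabric_path).convert('RGB').resize(pattern.size)
    return Image.blend(pattern.convert('RGB'), fabric, strength)

# Placeholder file names, for illustration only.
recolored = recolor_pattern('ethnic_pattern.png')
preview = apply_fabric(recolored, 'silk_texture.jpg')
preview.save('fabric_preview.png')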

User Interface
● The user only requires basic knowledge of computers and fashion.
● Knowledge of operating a computer and performing basic installations is sufficient.
● No extensive training is required.

The tool can run on any machine that supports a Java environment and has Adobe Photoshop CS5 installed.

4.   Skulptorous:  Real-time Depth Estimation and 3D Modeling

       by Aniket Handa and Prateek Sharma: 

The Microsoft Kinect provides an immensely detailed depth map through its sensor and is apt for the purpose of creating an application that uses the depth map to build a point cloud of “live” environment data.

We have developed ‘Skulpturous’, which uses the depth data to create 3D models in real time from simple hand gestures, by analyzing the depth data and concatenating the point clouds in the hard-bound nearest region. Skulpturous enables users to model 3D objects in real time using hand gestures and voice as input. It allows users to draw 3D models of varied colors and sizes and also lets them play with primitive objects such as spheres. It is our aim to bring this tool into coherence with other 3D tools so that users can import their creations into popular 3D-processing tools. Keeping this in mind, we have also successfully written a module for AutoCAD for sculpting art in real dimensions (3D), which enables the user to combine the advanced features of AutoCAD with our implementation to better visualize their creations and enhance them using the functionality available in AutoCAD.
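The core step of turning a Kinect depth frame into point-cloud data is a pinhole back-projection; here is a minimal sketch of that step, with approximate Kinect depth-camera intrinsics (the focal length and principal point below are rough published estimates, not values taken from the project).

import numpy as np

# Approximate intrinsics of the Kinect depth camera (illustrative values only).
FX, FY = 575.8, 575.8   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def depth_to_point_cloud(depth_mm):
    """Back-project a 640x480 depth frame (in mm) into an N x 3 point cloud in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0      # millimetres -> metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]               # drop pixels with no depth reading

# Clouds from successive frames can then be concatenated (e.g. with np.vstack)
# to grow the sculpted model as the hand moves.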

Skulpturous has a bright future ahead, with a truckload of possible applications that can be built on its basic functionality, including modeling of homes, interior decoration, fun applications that use physics engines, and interactive applications for children, to name a few.


5.    Painting Process Assistant for Visually Impaired Painters

         by  Saransh Gupta:  This project explores the possibility of creativity in visually impaired painters and provides a novel technology which overcomes their need for human assistance while painting. Visually challenged painters may not be common, but the children at the Institute for the Blind, Munirka, proved that they are not negligible either. With a little human assistance they can bring out a new face of creativity, astonishing people and science itself.

The Technical Assistant is a human-computer interactive technology which takes audio-visual inputs from the surroundings and the user, processes these inputs, and provides the user with the necessary outputs to facilitate the production of an efficient and accurate result. Here, specifically, the project is concerned with blind painters and with providing them an assisting tool that could replace a human assistant, reducing the user's reliance on somebody else.

The technical assistant for blind painters is a hardware-cum-software tool, essentially a bag with a stand to hold the camera in place, a set of headphones, a microphone, and a set of vibrating bands. It takes input from the surroundings through real-time audio-visuals provided by the camera and microphone. Output is in the form of variable-frequency vibrators, speech, and beeps. The inputs are processed on the laptop inside the bag, to which all these sensors are connected either directly or serially with the help of micro-controllers.
Intelligent algorithms are part of the tool, making it capable of understanding the drawn shapes. It can predict graphics using the information already drawn and even alert the user every time he strays from the appropriate track. It implements a genetic algorithm for this purpose and computer graphics algorithms for various shape understandings and transformations.
The tool has additional functionalities such as identifying the color palette and guiding the user to choose the correct color, and letting the user know when he mixes two colors or when the color on the brush or finger has run out (a minimal sketch of the palette-identification step follows below). It also offers features such as traversing the drawn path, alerting the user when he intersects an already drawn area, retracing the painting from one point to another, and various other features that may be required by the user.
Since this is an assistive technology, it places no constraint on the user: all acknowledgements and warnings can be overlooked, and the user has the power to take the final decision. Thus this technology does not inhibit the creativity of the human mind; it simply tries to overcome the disability without involving another human assistant in the process.
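The post does not describe the implementation, so the following is only a minimal sketch of one small piece of such a tool: naming the color currently in a palette well from a webcam frame. The palette positions and reference colors are hypothetical placeholders.

import cv2
import numpy as np

# Hypothetical palette layout: name -> (x, y) pixel position of each paint well.
PALETTE = {'red': (120, 400), 'blue': (220, 400), 'yellow': (320, 400)}

# Reference BGR values for the colors the tool can announce (illustrative).
REFERENCE_BGR = {'red': (40, 40, 200), 'blue': (200, 60, 40), 'yellow': (40, 220, 230)}

def color_at(frame, xy, radius=10):
    """Average BGR color in a small window around one palette well."""
    x, y = xy
    patch = frame[y - radius:y + radius, x - radius:x + radius]
    return patch.reshape(-1, 3).mean(axis=0)

def identify_color(frame, well_name):
    """Name the reference color closest to what is currently in a palette well."""
    sample = color_at(frame, PALETTE[well_name])
    distances = {name: np.linalg.norm(sample - np.array(bgr, dtype=np.float64))
                 for name, bgr in REFERENCE_BGR.items()}
    return min(distances, key=distances.get)   # the result would be spoken back to the painter

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(identify_color(frame, 'red'))
cap.release()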

Saransh demonstrated this project at  JED-i 2012 (http://jed-i.in/challenge/)  at IISc, Bangalore as a finalist.

6.  Indian Music Instrument and Type Detection

      by Jyoti Mishra:    The Indian Instrument Type Detector is part of an Indian genre identification strategy in the field of music. Due to the large increase in digital music databases, there is a need for a tool that can refine search according to some parameter. A survey tells us that most online music search is done on the basis of genre.
We intended to design a tool for the identification/classification of Indian musical genres that could take a sound file as input and classify its genre. During development, however, we could find no suitable features that could be used directly in that classification process. Hence we approached the problem differently, first considering the use of Indian instruments in different genres of music. For this we need to know which instrument is being played in a particular sound sample, so we focused on the instrument classification problem and developed a tool that automatically recognizes the type of instrument being played. For this classification procedure, we first extract low-level features from each input file and then classify them. Research was carried out to identify the most distinct features of each category of instrument; these low-level features are then used by a classifier to classify the music.
For classification we have applied the Gaussian Mixture Model (GMM) and a Hierarchical Gaussian Mixture Model (HGMM). These models were selected after trying various suggested models for classification and gave the maximum accuracy in our case.
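The post does not list the exact low-level features used, but MFCCs are a common choice for this task; the following is a minimal sketch of the per-instrument GMM approach using librosa and scikit-learn, classifying a clip by the model with the highest average log-likelihood. The training file lists are placeholders.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    """Frame-level MFCCs of an audio file as an (n_frames, n_mfcc) matrix."""
    y, sr = librosa.load(path, sr=22050)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_models(training_files):
    """Fit one GMM per instrument; training_files maps an instrument name to audio paths."""
    models = {}
    for instrument, paths in training_files.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        models[instrument] = GaussianMixture(n_components=8, covariance_type='diag').fit(feats)
    return models

def classify(path, models):
    """Label a clip with the instrument whose GMM gives the highest average log-likelihood."""
    feats = mfcc_features(path)
    scores = {name: gmm.score(feats) for name, gmm in models.items()}
    return max(scores, key=scores.get)

# Placeholder training lists, for illustration only.
models = train_models({'sitar': ['sitar_01.wav'], 'tabla': ['tabla_01.wav']})
print(classify('unknown_clip.wav', models))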

The instruments that can be successfully classified by our software are:
o Shehnai
o Dholak
o Saxophone
o Piano
o Congas
o Sitar
o Violin
o Tabla
o Flute

We have implemented a highly specific model and obtained an accuracy of around 80-85%. Though the accuracy is quite high, the model has its own drawback: if we need to add to the set of instruments that can be classified, the GMM model has to be re-evaluated each time. If applying the existing GMM gives the expected results, that is acceptable; but if the results are highly inaccurate, one has to design a new GMM for the new instrument separately and thus implement an HGMM in order to keep the results for the previously covered instruments unaffected.
