My Experience at Hack the North

For those of you who haven't heard of Hack the North: it is one of the biggest and most well-organised hackathons, where around 1,000 people from all over the world come together for a 36-hour hackathon at the University of Waterloo in Waterloo, Ontario. It is one of the most exciting Major League Hacking (MLH) hackathons. This year it happened on the 15th, 16th and 17th of September.

During my summer vacation, after doing two wonderful hackathons and getting a little obsessed with them, I started thinking about something more exciting: how about trying an international hackathon!? That was when I came across Hack the North, and why not apply for it? After all, it provides travel reimbursements too :)

So I went ahead and registered, which meant uploading my resume and writing a small essay of maybe 500 words describing a project I'm proud of and how I overcame the difficulties during it. To my surprise, on the morning of 6th August I woke up to a mail in my inbox saying I got selected! :D

Next up was applying for a Canadian visa. TBH, the visa procedure was more challenging than the hackathon. Till the very end I wasn't sure if I would be able to attend, because the visa processing took a hell of a lot of time, it being peak season. You won't believe that I finally got my visa on the afternoon of 14th September and boarded my flight on the night of the 14th; what a touch-and-go situation it was! Damn! I was so jet-lagged during the hackathon, because I headed straight from Toronto Pearson Airport to the university to start the hackathon that very midnight.

You know what, the PM of Canada, Justin Trudeau, was present at the opening ceremony!! His welcoming remarks were the highlight. There were really nice workshops and tech talks happening over the weekend, but unfortunately I wasn't free enough to attend them, except for the Diversity and Inclusion panel discussion. Tracy Chou and Cat Noone were my favourites of all the panelists.

This was the beautiful engineering building where we spent the 36 hours.

Link to all the amazing projects that were built during the hackathon: https://goo.gl/JuyqHg
Link to my team's project: https://goo.gl/sie7W1

And here's the crowd that gathered near the sponsors to collect the swag :P

I would definitely say to whoever gets an opportunity to participate in Hack the North: just go for it, because it is the best place to network and get to know people from different parts of the world. This was a really valuable learning experience for me, right from dealing with visa procedures, to travelling alone and changing terminals at layovers, to finally doing a hackathon while totally jet-lagged.

This also gave me an opportunity to visit my relatives living in Toronto and spend time with them visiting places in and around the city, like the stunning Niagara Falls and the CN Tower.




One week at HackEYEthon

This blog describes how I spent the past week in Hyderabad; I experienced and got to see a lot in just one week. On the morning of 24th June I reached Hyderabad from Bangalore by bus. It was a first-time experience for me, travelling alone from one unknown city to another on an overnight bus journey.

In Hyderabad, my dad's batch-mate, who is also a very good friend of his, arranged a visit to NFC for me on my first day. NFC is the Nuclear Fuel Complex, where I got to see the manufacturing of fuel rods from scratch: natural uranium is converted into uranium dioxide pellets, which are then filled into zirconium alloy tubes and, after a lot of tests and processing, bundled into a fuel assembly. Even tubes and sheets of various dimensions were manufactured at the same place, by processes I had never heard of before. Obviously a day-long plant visit like this is tiring, but trust me, it was worthwhile. In the evening I took an Uber to Banjara Hills, where we were accommodated for the week by the L V Prasad Eye Institute. And the real story starts here :P

This year witnessed the fifth edition of the Engineering the Eye hackathon, organised by the L V Prasad Eye Institute and the MIT Media Lab. I was so happy and lucky to get selected for this. I got to the accommodation and met my two room-mates: Sneha, who is my classmate as well, and Divya, a very dynamic girl whom I met for the first time there. Never did I imagine that I would be sharing so many beautiful memories with them over the week.

The next morning all the participants, mentors and clinical mentors assembled in the auditorium on the 6th floor of the hospital. (Well, the balcony had a nice view of Hyderabad from here...)


We first had a mandatory introductory round, where I realised I was among a very diverse and highly knowledgeable group of people, many of them machine learning enthusiasts (ML and image processing became the buzzwords of the hackathon) from colleges all over India, and I knew that this was the best place to network and build connections. After that, Dr. Anthony Vipin Das took the stage, welcomed all of us with such energy, and shared the story of how Srujana, the Centre for Innovation, was born, and its journey so far. If you ever happen to visit LVPEI, you will notice that the ground floor, right where you enter the campus, is given to Srujana; that is the importance given to innovation in health-care there. A glimpse of Srujana:


Coming back to Dr. Vipin: I am glad I met him, because he is such an amazing doctor, researcher, innovator and TED speaker, and I am totally amazed at his research and all his work. Have a look at this to see for yourself: https://www.youtube.com/watch?v=UzBTBAoAFE0&t=67s

It was a pleasure to hear Dr. Pravin (from the Tej Kohli Cornea Institute), Dr. Nandini, Dr. Ashutosh, Dr. Jagdish and Dr. Subhadra (a specialist in preventing blindness among premature babies) speak about their work and constantly encourage us to take up problems in eye-care and build effective solutions for them. Following them, the four team-lead technologists of Srujana, Sankalp, Koteshwar, Sandeep and Ashish, spoke about their thoroughly commendable work and their projects at Srujana. By now you might have got an idea that this was not merely a hackathon but also a place where we got introduced to great personalities who guided us throughout and were very open to all our questions; we got to hear Dr. Ramesh Raskar (head of the Camera Culture group at the MIT Media Lab), Prof. Ramesh Loganathan (IIIT Hyderabad), Prashant Gupta (from Microsoft) and Mr. Annamalai (from CYIENT) speak. Dr. Sangwan (Director of Srujana) talked to us on the second day and was by our side for the rest of the week, constantly motivating us and inspiring us with his expanse of knowledge, both within his field and outside it. This just reiterates the fact that if you have an idea and want to do something in med-tech, then no one is going to stop you; rather, you will have a huge pool of people supporting you.

In the second half of the first day of the hackathon, the mentors of the 8 projects selected for this year gave presentations describing their projects and the goals for the hackathon. The projects were ACDC (Anterior Chamber Depth to Corneal thickness ratio quantifier, which uses the Van Herick grading system to quantify the ratio), BRaVO (Branch Retinal Vein Occlusion, which quantifies the severity of occlusion in the patient's retina), OIO (Open Indirect Ophthalmoscope; do read more about this awesome project on Google), creative game-building for the blind, a chat-bot (that would collect symptomatic information and mediate between patients and doctors), a corneal topographer (a portable device that would generate corneal topography on the smartphone itself), KAHT (Keratoconus Analysis and Hypothesis Testing, which involves working with OCR libraries and machine learning algorithms to extract useful parameters from scans) and, last but not least, DESQ (Dry Eyes Syndrome Quantifier).

After all 8 presentations, the participants had to shortlist the projects they would like to work on based on their interests, discuss with the mentors and form a better idea of everything. As for me, one thing I was clear about was the domain I wanted to work in: image processing. Based on that, I shortlisted ACDC, DESQ and BRaVO. I went around talking one-on-one to all the mentors and started prioritising these projects. What I observed was that many people who, like me, were interested in image processing were inclined towards ACDC, maybe because the mentors presented the task quite clearly. I, on the other hand, was inclined towards DESQ, because it was a completely new approach and a new idea, which involved working closely with the clinical mentor, understanding what exactly dry eye syndrome is and how it is caused, and developing our own approach over the week. Finally, after all the discussions, which extended till the evening tea break, I was clear about my priorities: DESQ, then ACDC, then BRaVO. Towards the end of the day, when the mentors started reviewing and accepting participants into their teams, I was very happy to get into DESQ, the one I had wished for.

My experience with Team DESQ:
We were mentored by Aditya, Rohith, Serena and Dr. Nagaraju (our clinical mentor). Meet the cool team:


Dry eye is becoming a very common condition nowadays. Basically, when your eyes don't get sufficient moisture, you might be suffering from dry eyes. Especially in the IT sector, where people gaze at a laptop screen the whole day, these kinds of irritations to the eyes commonly occur and can eventually lead to dry eyes. Going a little biological: we have a tear film just above the cornea that consists of three layers; the one closest to the eye is the mucin layer, then the watery aqueous layer, and then the oily lipid layer. Every time we blink, this tear film gets refreshed. If you find a person blinking more than usual, it is probably because their aqueous layer evaporates faster, so they need to blink again to refresh it. Our focus for this hackathon was on the lipid layer: under proper illumination of the eyes, interference patterns form on the lipid layer, and by studying those patterns the irregularities in the lipid layer can be quantified. The mech and tronix team worked closely together and designed the evenly illuminated dome, finding the exact beam angles of the LEDs used and developing the right SolidWorks model for it, which was then 3D-printed, with all 33 LEDs soldered onto it in a parallel arrangement. Finally, this is how it looked:


The task for the image processing team was to extract the region of interest from the image and then, based on the different colours present in the lipid layer interference pattern, develop a 3D equivalent model that helps visualise the irregularities in the thickness of the lipid layer. Based on previous research, a particular colour in the interference pattern corresponds to a particular thickness of the lipid layer; for example, blue means a 180 nm and brown a 130 nm thick lipid layer, and so on. I was a part of this image processing team and this is my work sample:
The original image:

This shows the irregularities in the lipid layer:
Check out the code here: https://github.com/OmPrakash95/Tearoscope/tree/master/MATLAB
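If you'd like the gist of the idea without opening the repo, here is a minimal Python sketch of the colour-to-thickness mapping (our actual hackathon code was MATLAB, linked above; the reference colours and thickness values below are illustrative assumptions, not calibrated ones):

```python
# A minimal sketch of the colour-to-thickness idea. The reference RGB values
# and thicknesses are illustrative assumptions, not calibrated measurements.
import numpy as np

# Hypothetical lookup: representative colour of an interference fringe ->
# lipid layer thickness in nm (e.g. blue ~ 180 nm, brown ~ 130 nm, as above).
REFERENCE = {
    (60, 90, 200): 180.0,   # blue
    (140, 90, 60): 130.0,   # brown
    (200, 200, 200): 90.0,  # grey/white
}

def thickness_map(roi):
    """Map each ROI pixel to the thickness of its nearest reference colour."""
    colours = np.array(list(REFERENCE.keys()), dtype=float)      # (K, 3)
    thicknesses = np.array(list(REFERENCE.values()))             # (K,)
    pixels = roi.reshape(-1, 3).astype(float)                    # (N, 3)
    # Distance of every pixel to every reference colour
    dists = np.linalg.norm(pixels[:, None, :] - colours[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return thicknesses[nearest].reshape(roi.shape[:2])           # (H, W), nm

# The resulting 2D thickness map can then be rendered as a 3D surface
# (e.g. matplotlib's plot_surface) to visualise the irregularities.
```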

There is more to this project, like finding the NITBUT, the Non-Invasive Tear Break-Up Time (we just started on this but didn't complete it), because dry eye cannot be diagnosed with just one test; there are multiple tests, which together establish the severity of dry eye in a patient. Our ultimate goal is to create a cheap and portable device that would act as a single device for multiple tests and finally not just detect dry eye but also quantify it. We were happy to achieve our hackathon goals, and we got our device tested on a real patient who had an irregular lipid layer: we successfully obtained the interference patterns when viewing through a slit lamp (a device used for focusing on a part of the eye) with the evenly illuminated dome that was built. A huge thanks to Dr. Nagaraju for all the support throughout the hackathon; I really got to learn a lot.

My hackathon experience doesn't end here. One day in between, as a break for all of us, a trip to Tudukurthy village was arranged, where we saw the primary and secondary vision centres of LVPEI and got to understand how they function and provide free services to the villagers. One evening a hospital tour was also arranged, which really got me excited: the eye is just a small organ (but a very important one, and also the only exposed part of the brain), yet here is a whole big hospital built just for eyes, with big dedicated departments for retina, cornea, glaucoma and so on. It was a really nice experience to see the functioning of the different devices used for various diagnoses. The best part: I was the only one who got a chance to be tested for glaucoma on a sophisticated automated visual field testing device, as part of a demonstration obviously. The test lasted about 4-5 minutes, and thankfully I don't have glaucoma, and I have a very normal foveal sensitivity of 37 dB (the maximum being 40 dB). (Google which part of the retina is called the fovea and which part is the blind spot.) :P

The last day was the final presentation day, where we showcased all that we did throughout the week. During the closing ceremony we got an opportunity to hear from Dr. Gullapalli Rao (Chair of LVPEI), who started his talk by asking: why are you all here? To which we all answered that we were there to learn, to network, to create a social impact and so forth. But the answer the doctor expected was that we were there to eradicate blindness; everything we answered was a subset of that. This was another important moment for me, where I realised I am a part of this cause and of the journey to help everyone enjoy the beauty and gift of vision. Though I am only a tiny part of it now, I am proud of it and wish to do something much bigger in the future.

To sum it up, it was a wonderful experience meeting so many like-minded people, working with them for a whole week, and exploring the biomedical field, which I wish to pursue further. This was definitely a turning point for me in terms of building my research interests. And of course, how can I forget to mention the yummy Paradise Hyderabadi biryani! :P

Here's the DESQ group photo, taken after the successful testing of our prototype on the patient:


If you'd like to know anything more, feel free to reach out to me personally and I'll be glad to discuss and share whatever I can with you. Thanks :)


Fundamentals of Image Processing demystified! Contd...

Hey again! 
Starting from where I left off...

Transform: What is a transform? Yes, you are right, it changes a signal from one domain to another, but that is actually the transformation process. A transform is simply a linear operator. By linear, it just means that the signal itself should not change when you view it in the other domain. It's like going from one room to another: the room is different, but I'm the same. Why is a transformation necessary? Because it separates out the frequency components, which helps us understand the signal better. If a transformation process exists, then its inverse should also exist, i.e. if I go from my room to another room, I should also be able to come back to my room, remaining the same throughout. Let me write down the transformation equations:
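(The original scribble of the equations hasn't survived here; presumably the three were the Fourier transform, its inverse, and the windowed short-time version discussed below.)

```latex
% Fourier transform (time domain -> frequency domain)
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt

% Inverse Fourier transform (back to the time domain, signal unchanged)
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{j\omega t}\, d\omega

% Short-time Fourier transform (a window w centred at t_0 keeps time finite)
\mathrm{STFT}(t_0, \omega) = \int_{-\infty}^{\infty} f(t)\, w(t - t_0)\, e^{-j\omega t}\, dt
```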
What do you understand by these three equations? Let me explain in a dramatic way :P I'm a Mumbaikar and Mumbai is my main domain, where I know how to manage everything independently. That is f(t) in the time domain. If, by flight/train or any such means, I go to Bangalore, it is a new place to me and I know nothing there, so I'm dependent on someone. If this someone is of a higher authority, I stand no chance of enjoying my freedom and have to adjust myself accordingly. To draw the analogy: the flight/train is the transform used, Bangalore is the transformed domain, and that someone of higher authority is the exponential function. Just as I lose my freedom, the function loses its time characteristics against the huge exponential signal (the RHS of the equation is no longer a function of time). Now why choose an exponential signal? Because it is composed of cos and sin signals, which are known signals, like the authority who will be my point of contact in Bangalore being someone known to me. This opens up something to think about... is losing time going to be good? Just go through the following scribble:
We observe three different signals, yet taking their Fourier transforms gives us exactly the same frequency composition. This happens because in the frequency domain you don't get the feel of time, just as in the time domain you don't get the feel of frequency. So we are in a fix and cannot figure out which original signal a Fourier transform came from. Coming back to the exponential signal, or that someone of higher authority: what if that someone now has less authority than you? In that case you can still enjoy your freedom, or, in this context, by making the time extent finite, the time characteristics can be retained. This is the STFT, the Short-Time Fourier Transform. So there is only a minor difference between the STFT and the FT. In the STFT, the signal is divided into segments small enough that each segment (portion) of the signal can be assumed to be stationary. For this purpose, a window function "w" is chosen; the width of this window must equal the segment of the signal for which the stationarity assumption is valid. The change in the equations reflecting this is the window term w(t - t0) multiplying the signal, i.e. making the effective time extent finite.

Let's observe one more interesting thing here. We know if we use a window of infinite length, we get the FT, which gives perfect frequency resolution, but no time information. So, 
Wide window ===>good frequency resolution, poor time resolution. 
Narrow window ===>good time resolution, poor frequency resolution.
Let us see that for ourselves below. Here we have taken a non-stationary signal with four different frequency components at different times. 

Now, let's look at its STFT, and we find that these four peaks are located at different time intervals along the time axis.

The following figure shows four window functions. I will now show the STFT of the same signal given above, computed with each of these windows.

First, let's look at the narrowest window. We expect the STFT to have very good time resolution but relatively poor frequency resolution, and we note that the four peaks are well separated from each other in time.

Now let's take a wider window:

Even wider:

Note that the peaks are no longer well separated from each other in time; however, the frequency resolution is much better. Another thing we can infer is that low-frequency signals are better resolved when the window function is wider, as we then get more frequency resolution and less time resolution, while high-frequency signals are better resolved when the window function is narrower, since a fast-changing signal needs better time resolution.
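If you want to see this for yourself, here is a minimal sketch of the same experiment (assuming numpy and a reasonably recent scipy): a signal with four frequency bursts, analysed with a narrow and then a wide window.

```python
# A minimal sketch of the experiment above: four frequency components living
# in four different time intervals, analysed with two window widths.
import numpy as np
from scipy.signal import stft

fs = 1000                               # sampling rate, Hz
t = np.arange(0, 1.2, 1 / fs)
freqs = [50, 100, 200, 300]
x = np.concatenate([np.cos(2 * np.pi * f * t[: len(t) // 4]) for f in freqs])

for nperseg in (32, 512):               # narrow vs wide window
    f, tt, Zxx = stft(x, fs=fs, window='hann', nperseg=nperseg)
    # The frequency spacing shrinks (improves) as the window widens,
    # while the number of time frames shrinks (time resolution worsens).
    print(f"window={nperseg:4d} samples: "
          f"freq resolution ~{f[1] - f[0]:6.2f} Hz, "
          f"{len(tt)} time frames")
```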


These examples should have illustrated the implicit resolution problem of the STFT. Anyone who wants to use the STFT is faced with it: what kind of window to use?

To make the resolution problem clear once more: one cannot know the exact time-frequency representation of a signal, i.e., one cannot know what spectral components exist at what instants of time. What one can know are the time intervals in which certain bands of frequencies exist, which is a resolution problem. The wavelet transform (WT) solves this dilemma of resolution to a certain extent. I'm not getting into the details of the wavelet transform in this blog, though I have already given an idea of it towards the end of my first blog on neural networks.

Digital Filters: The last topic of this discussion. Designing the right filter is very important for any application, as we all realise. This is again a very big topic, but I'll just summarise it in the following charts.

It is very good to know the various design methods in depth (I haven't explained the methods here, just mentioned them above), because that is what allows us to grasp or visualise any problem and arrive at the right solution. That said, all these methods are ready-made functions in any toolbox, so you don't need to sit and design the algorithms yourself; all you must know is when and where to use which method. Reading about filter design frameworks and the different methods for both FIR and IIR filters from any standard DSP textbook would be helpful at this point.
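Just to show how little code the toolbox route takes, here is a minimal sketch (assuming a recent scipy) that designs one FIR and one IIR low-pass filter and applies both; the cutoff and orders are illustrative:

```python
# A minimal sketch of "ready-made functions in any toolbox": one FIR and one
# IIR low-pass design, applied to a noisy tone. Values are illustrative.
import numpy as np
from scipy import signal

fs = 1000                                  # sampling rate, Hz
cutoff = 50                                # low-pass cutoff, Hz

# FIR design by the window method (101 taps, Hamming window)
fir = signal.firwin(101, cutoff, fs=fs, window='hamming')

# IIR design: 4th-order Butterworth
b, a = signal.butter(4, cutoff, fs=fs)

# Apply both to a 10 Hz tone buried in noise
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
y_fir = signal.lfilter(fir, 1.0, x)
y_iir = signal.lfilter(b, a, x)
print(y_fir[:3], y_iir[:3])
```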

With this I wrap up the topic and I hope you were able to follow me throughout this blog and the previous blog. :)


Fundamentals of Image Processing demystified!

Image processing does sound cool, doesn't it!? But are your signal processing fundamentals strong enough? After all, an image is also a 2D signal. Image processing can be implemented on various platforms like MATLAB, Python etc., but your expertise doesn't lie there; what matters is your understanding of the subject and clarity of fundamentals, which is what lets a very simple algorithm be built for a complex problem. So here I'm not really concerned with explaining what image processing is, but I'll make sure I discuss certain concepts that will make learning image processing easy.
Certain key terms that you should be thorough with are:

  • Frequency
  • Signal
  • System
  • Convolution
  • Correlation
  • Transform
  • Digital Filters
Ask yourself what you understand by each of the above terms. Keep your answers in mind and read through the following explanations to form a better understanding.


Frequency: You are listening to PM Narendra Modi giving a speech, and on another channel, a cricket commentary. It goes without saying that you may find the speech easier to comprehend, with every word heard distinctly, while the commentary is a little more difficult to follow. Clearly, what makes these two sounds different is their frequencies. Frequency is nothing but the rate of change of a signal. A low-frequency signal carries a lot of information and is said to be slow changing, while a high-frequency signal can be a little irritating to our ears at times, as it is fast changing, and so is treated as noise. The same goes for why babies sleep to a lullaby rather than a rap song.
A slow-changing signal is thus of interest to us, but can you think of a problem with it? Yes: being slow changing, it carries very little energy and would die out without reaching any great distance. This can be tackled either by amplifying the signal or by frequency modulation. Frequency modulation is nothing but impressing the low-frequency signal onto a high-frequency carrier, which can carry it over a greater distance. Like your lunch box: the box carries the food, and you eat the food inside, not the box, right :P
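To make the lunch-box analogy concrete, here is a minimal numpy sketch of a slow message riding on a fast carrier; all the values are illustrative:

```python
# A minimal sketch of frequency modulation: the carrier's instantaneous
# frequency follows the slow message. Values are illustrative.
import numpy as np

fs = 10_000                               # sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
message = np.sin(2 * np.pi * 5 * t)       # slow-changing 5 Hz message
fc, kf = 1000, 200                        # 1 kHz carrier, frequency deviation

# Integrate the message into the carrier's phase
phase = 2 * np.pi * fc * t + 2 * np.pi * kf * np.cumsum(message) / fs
fm_signal = np.cos(phase)                 # this is what actually travels
```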
So frequency plays an important role, because by looking at the frequency you can tell whether a signal is of interest to you or not.
To give an idea of one application: there is an image with a tumour cell that needs to be detected. Throughout most of the image you may find uniformity, with neighbouring pixel values changing by only a small amount (slow changing), but where a tumour cell is encountered, because it is captured differently from the normal cells around it, there is a sudden change in pixel value, creating a high-frequency region. Hence the algorithm narrows down to just finding the high-frequency regions.
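A minimal sketch of that idea, on a synthetic stand-in image (not real medical data), using the gradient magnitude to flag fast-changing regions:

```python
# A minimal sketch: flag high-frequency (fast-changing) regions of an image
# via its gradient magnitude. The image is a synthetic stand-in.
import numpy as np
from scipy import ndimage

# Smooth background with one small bright blob (our stand-in "tumour")
img = np.zeros((100, 100))
img[45:55, 45:55] = 1.0
img = ndimage.gaussian_filter(img, 1) + 0.01 * np.random.randn(100, 100)

# The gradient magnitude is large only where pixel values change quickly
grad = ndimage.gaussian_gradient_magnitude(img, sigma=2)
suspect = grad > grad.mean() + 3 * grad.std()    # threshold is a heuristic
print("high-frequency pixels:", suspect.sum())
```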

Signal: Everyone knows what a signal is. By the technical definition, it is just a function of one or more independent variables; time, for example. It is important to know the characteristics of a signal in order to process it, and it has different characteristics in the time and frequency domains. It is also important to know what type of signal it is: causal, LTI, and so on. Taking me as an example: on my blog I talk about technical stuff, but if you find me on Instagram or Twitter you'll find me talking about various other things like travel photography, social causes etc. So taking this blog as one domain, you get only a small idea about me, but if you change the domain you discover more of my characteristics. The same goes for any signal: you have to deal with both the time and the frequency domain to deduce more characteristics and understand the signal better for processing.

System: Anything you see is a system, be it a mobile phone, a human body or any device. What happens inside the system is not at all important; what is important is to have an understanding of the system. By understanding I mean you should know what the system does when you give it a certain input. You attend a call by pressing green and reject it using red; that is understanding, and what happens inside is of no concern to us. It is also very, very necessary to know for what inputs the system becomes unstable. Essentially there are three things you need to care about in any system: the dynamics of the system, i.e. the way the system changes with time; the transfer function of the system, which gives the relation between input and output; and finally the stability of the system.
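A minimal scipy sketch of these three things, for an example system H(s) = 1/(s^2 + 2s + 5) (the system itself is just an assumption for illustration):

```python
# A minimal sketch: a transfer function relating input to output, its
# dynamics (step response), and a pole-based stability check.
from scipy import signal

# Example system H(s) = 1 / (s^2 + 2s + 5), chosen only for illustration
H = signal.TransferFunction([1], [1, 2, 5])

# Dynamics: how the system evolves with time for a step input
t, y = signal.step(H)

# Stability: a continuous-time LTI system is stable if all poles
# have negative real parts
stable = all(p.real < 0 for p in H.poles)
print("poles:", H.poles, "stable:", stable)
```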

All this signal processing is done on a computer. Real-life signals, being analog in nature, first need to be converted to digital form to be given as input to the computer, so knowing analog-to-digital conversion, and vice versa, is essential. An analog signal is first sampled, i.e. discretised in time. At this step, following the sampling theorem (fs > 2fm) is important, because when you do the reverse, recovering the analog signal from the digital one, improperly sampled information is lost as the spectral copies overlap; this is the aliasing effect. After sampling comes quantisation, where selecting the number of quantisation levels is important; it is given by 2^n, where n is the number of bits.
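A minimal numpy sketch of both steps, sampling a 10 Hz tone at fs > 2fm and quantising it with n = 4 bits:

```python
# A minimal sketch of A/D conversion: sample a 10 Hz tone at fs > 2*fm,
# then quantise it to 2^n levels.
import numpy as np

fm, fs, n_bits = 10, 100, 4          # fs = 100 Hz > 2*fm = 20 Hz, 4-bit ADC
t = np.arange(0, 1, 1 / fs)          # sampling: discretise time
x = np.sin(2 * np.pi * fm * t)

levels = 2 ** n_bits                 # 2^n quantisation levels
# Map [-1, 1] onto the nearest of the 16 levels
xq = np.round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

print("max quantisation error:", np.abs(x - xq).max())  # <= half a step
```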

Convolution: This is the best part. It is actually at the origin of every digital signal processing algorithm. First, the mathematical understanding, which is the convolution sum: y[n] = Σk x[k] · h[n − k].
So 'x' basically denotes a signal (along with noise) and 'h' is the impulse response of the system. The filter can be the system, and convolution is really a filtering process. Here's one way to view it: you want to approach a senior for some guidance, but you don't know that person, so first you talk to others who know him and form a reference response; taking the reference to be an impulse, it becomes the impulse response. After that you approach him and finally form a complete response, taking your own input as well as the reference into consideration. You are the input 'x', the senior is the system with reference response 'h', and your output response is the convolution, giving 'y' as the resultant response. This is an explanation of 1D convolution. Now, as we observed, we need to flip, shift, multiply and add, all in one cycle, which gets tedious; in such a situation you change the domain and do a simple multiplication, which gives the same result. This is actually a property of convolution (the convolution theorem).
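A minimal numpy sketch showing both routes giving the same answer, the direct flip-shift-multiply-add and the change-of-domain multiplication:

```python
# A minimal sketch: direct convolution versus the "change the domain and
# multiply" shortcut (the convolution theorem).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # input signal
h = np.array([0.5, 0.5])             # impulse response (a 2-tap averager)

y_direct = np.convolve(x, h)         # time-domain flip-shift-multiply-add

# Same result via the frequency domain: FFT, multiply, inverse FFT
N = len(x) + len(h) - 1
y_fft = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

print(y_direct)                      # [0.5 1.5 2.5 3.5 2. ]
print(np.allclose(y_direct, y_fft))  # True
```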

Correlation: This gives the degree of similarity between two signals, as a percentage. There are two types of correlation: auto-correlation and cross-correlation. Auto-correlation gives the similarity between a signal and the same signal after a certain time lag, and therefore allows prediction ahead of time (note: convolution, by contrast, looks back in time). Cross-correlation gives the degree of similarity between any two signals. A few application-specific examples: a patient under observation has an ECG taken every hour and compared with the previous ones to measure similarity and predict the future response, which helps detect heart problems. Speaker recognition is also based on this concept: my speech is compared with my own saved speech and the similarity is computed. The same way, in biometrics, the degree of similarity between fingerprints is computed against a set threshold; in highly secure places the required degree of similarity might be 90%, and so on.
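A minimal numpy sketch of similarity as a percentage, in the spirit of the speaker-recognition example (the signals and the threshold idea are illustrative):

```python
# A minimal sketch: normalised correlation of two signals at zero lag,
# scaled to a percentage. Signals here are synthetic stand-ins.
import numpy as np

def similarity(a, b):
    """Normalised correlation coefficient, scaled to a percentage."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b)) * 100

t = np.linspace(0, 1, 500)
reference = np.sin(2 * np.pi * 5 * t)                    # e.g. saved voice
candidate = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(500)

score = similarity(reference, candidate)
print(f"similarity: {score:.1f}%")     # accept if above a chosen threshold
```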

There are two more concepts, transforms and digital filters, that I'm yet to cover; I'm keeping them for the next blog, as I don't want to make this one unnecessarily long. If you want to dig deeper into these concepts, do so; there is a hell of a lot to explore. Everything I have written can be backed by maths and formulas, which I have avoided here.



Shape your project ideas to Empower The Nation!

Here is a blog on my weekend. 


On 11th May 1998, India achieved a major technological breakthrough by successfully carrying out nuclear tests at Pokhran. The first indigenous aircraft, "Hansa-3", was also test-flown at Bangalore on this day, and India performed a successful test firing of the Trishul missile on the same day as well. In view of these technological achievements on a single date, 11th May was chosen to be commemorated as National Technology Day.
On account of National Technology Day, an exhibition of BARC technologies in the field of electronics and computers was held over the weekend. There were around 34 amazing exhibits under the theme "Electronics and Computer Technology: Empowering the Nation".

Just to throw some light on a few of the projects:
  • Seeker for BrahMos Missile
    • The missile hits the target, but the target is also intelligent enough to predict an attack and shift its position. Controlling the target is not in our hands (obviously, because it is the enemy), but what we can do instead is detect the moving target and hit it at the right coordinates. The seeker, located at the apex of the missile, does this job. It is actually a transceiver that works on the monopulse detection method; by then applying some ratiometric analysis, the coordinates of the target are determined with great accuracy.
  • Robot assisted Neuro Navigation and Neurosurgical Suite
    • For a normal surgery, the doctor needs to cut open a part of the skull so as to see properly inside while operating. The idea of this project, instead, is to find the coordinates inside the skull where the operation needs to be done from a normal CT scan, and then make a hole just big enough for the surgical needle to get in. Certain markers act as reference points that assist in getting the needle to the right position. All of this can be viewed on a monitor, where the doctor can see how the robotic arm moves the needle into the hole and inside the skull. The monitor image of the skull can be cross-sectioned in whichever way needed, so the doctor gets an idea of the inside of the skull without actually cutting the patient open. The whole task can be either automated or performed manually, depending on the surgical requirements. It was developed in the ROS (Robot Operating System) environment. I wish I could have clicked pictures of the demonstration to give you all a better idea.
  • Digital Medical Imaging System
    • Conventional X-rays produce a film, which is then examined by the doctors. With this system, a digital image of the X-ray is produced directly on the computer, and it can be sent to a doctor anywhere. Ordinarily, different examinations require different machines, for angiography, radiography, tomography etc.; this is like one machine for all purposes. It is a bed on which the patient lies, with the X-rays emitted from below. On top there is a flat panel, a stack of multiple layers performing different functions, where the X-rays are received: the first layer is the scintillator, which converts X-rays to light, and then there is a CCD (charge-coupled device) layer, which converts the analog light signal to a digital output for the computer. Even taking X-ray videos becomes easy. Not that this technology doesn't exist; big hospitals have it installed, but the speciality of this product lies in being indigenous and really affordable.
  • Electronic Nose
    • This device imitates a nose by detecting the presence of different gases. It has a sensor array, each segment being a different semiconductor composition, in order to detect a particular gas. For example, the segment with an SnO2:CuO film can detect H2S gas. It is basically a chemiresistive sensor: when H2S reacts with this film, CuS is formed, which is metallic, and hence the electrical resistance changes; this change is calibrated and recorded as a concentration in ppm. The reaction is reversible, which is why the sensor can be reused. Along similar lines, the other segments in the sensor array work for different gases. A particular gas can also be detected by multiple segments, in which case PCA (Principal Component Analysis) is carried out to determine the gas.
  • Application of Signal Processing in Health Monitoring of Buried Oil pipelines
    • A caterpillar-like device is inserted at one end of the pipeline and taken out at the other end, where the data it collected throughout its journey is processed. It works on the principle of detecting changes in magnetic flux and eddy currents using Hall-effect sensors. It is an in-house system with a magnetic module, DAQ (data acquisition system) and power module built in. Where there is a crack, the magnetic flux leakage increases; this way the data is recorded and the pipeline's health is monitored.
  • Object Identification and Face Detection
    • If you have read my previous post, this is just an application of neural networks. The training data, covering 80 different object classes, was taken from the Microsoft COCO dataset. For images, CNNs (convolutional neural networks) come in handy, and that is the idea that was applied, with the backpropagation algorithm used for training. The training took about 200 hours, and that too on GPUs! The idea is that each pixel value is fed to a separate input neuron, and the output layer has 80 neurons giving the probabilities of the 80 different object classes. The number of hidden layers and the number of neurons in each layer is a matter of trial and error. Once the training is done and the final weights of the neural net are saved, the architecture can be deployed on any other device, like a mobile phone, and any random image can then be classified within just 5-10 seconds.
  • Depth estimation and application in Robotics
    • Depth estimation is a fundamental problem in robotics and computer vision. This project used laser-assisted stereo vision. We have two eyes for a purpose, and that is to perceive depth. If you view a nearby object with only one eye and then switch eyes, the apparent shift in the object is larger than when the object is far away (try it out yourself). Stereo cameras are generally used for depth estimation, with the disadvantage of being very difficult to calibrate, which is where the laser helps. The algorithm used principles like triangulation and time of flight; the shift (the disparity) was estimated and a depth map generated (see the sketch just after this list). The end product was a cart carrying a laser beam generator and detector with a span of 0 to 270 degrees; as it navigates, it generates a map of the whole area on the screen using the Simultaneous Localisation and Mapping (SLAM) method. All this was developed on ROS (a distributed system).
  • X-Ray 3D Computed Tomography for Non-destructive Evaluation
    • The idea in one sentence: X-ray images, which are grey-scale images, are taken at different angles, and by applying a back-projection algorithm the whole volume of the object is reconstructed. This can be performed using the VTK software.
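Here is a minimal sketch of the triangulation principle mentioned in the depth estimation project above; the focal length and baseline are illustrative numbers, not the exhibit's:

```python
# A minimal sketch of stereo triangulation: depth Z = f * B / d, where f is
# the focal length (in pixels), B the baseline between the two views and
# d the disparity (pixel shift). All numbers are illustrative.
import numpy as np

focal_px = 700.0        # focal length in pixels
baseline_m = 0.1        # 10 cm between the two viewpoints

def depth_from_disparity(disparity_px):
    """Nearby objects shift more between views (large disparity, small Z)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity([70, 7]))   # -> about 1 m and 10 m
```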
I've tried describing a few of the projects above, at least their ideas, and because I'm from an electronics background my descriptions are a little electronics-centric and not so much about computers. To name a few more projects:
  • Cyber Security Monitoring in BARC
  • Handheld Biosensor for detecting Methyl Parathion Pesticides
  • Body Composition Analyser
  • Speech Analytics with Machine Learning
  • Aerial Radiation Monitoring Systems
  • Real time intelligent perimeter intrusion detection system
All in all, the weekend was amazing. I spent a total of 5 hours at the exhibition, absorbed in the wide spectrum of emerging technologies, appreciating all of them while thinking of possible ways in which I can contribute to this field.

Cheers!!

The Idea of Neural Networks

Last semester I had a very interesting course on Neural Networks and Fuzzy Logic. A big thanks to our professor for making the course as interesting as it sounds.

Do we all not realise that our brain is a super-computing, massively fascinating biological entity? The way we build memories and later connect the dots to figure things out is not at all as easy as it sounds. We made computers to do the things we are not good at, like huge mathematical computations. But imagine the power of computers that can learn and make decisions the way humans do. That's what it is! Artificial intelligence!

In this post I'll jump straight into the concepts and the key points, dividing them into ten sub-topics:

  1. Sigmoid Neurons
  2. Backpropagation Algorithm
  3. Training vs Testing Error
  4. RBF: Radial Basis Function
  5. PCA: Principal Component Analysis
  6. Supervised vs Unsupervised learning
  7. Classification vs Regression
  8. SVM: Support Vector Machine
  9. CNN: Convolutional Neural Network
  10. Fourier vs Wavelets
Let me first share the doc link in which I have summarized every lecture: https://docs.google.com/document/d/1hVTkZZg0dv_VdqvnsozQwxmCHgarCSnuo0zpEwmV2Tk/edit?usp=sharing

Here I share my scribbles of the sub topics I just mentioned.

I could have made this post very elaborate, but I have limited myself to providing just the key ideas. I'll share a few references here:
http://videolectures.net/DLRLsummerschool2018_toronto/
https://drive.google.com/drive/folders/0B41Zbb4c8HVyUndGdGdJSXd5d3M
https://www.udacity.com/course/intro-to-machine-learning--ud120
http://neuralnetworksanddeeplearning.com/about.html
https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/

This subject is very interesting, and the more you explore, the more you'll understand. So enjoy the exploration, folks!

My first Code-along workshop

I had one of the most satisfying Saturdays last weekend, and that feeling is the reason I'm writing a blog after almost a year. I often...