My first Code-along workshop

I had one of the most satisfying Saturdays last weekend, and that feeling is the reason I'm writing a blog after almost a year.

I often come across Richard Branson's quote, "If someone offers you an amazing opportunity and you are not sure you can do it, say yes - then learn how to do it later", and what I did was exactly that. Having been a WiDS Mumbai Ambassador for two years in a row now, I sometimes can't believe how much exposure and confidence it has given me, simply because I made use of the opportunity at the right time.

WiDS - Women in Data Science, with its mission to INSPIRE. EDUCATE. SUPPORT. - has created a huge platform for women in this field to come together, spread their knowledge, and take pride in sharing what they know, which I believe is the best way to feel good about yourself and your capabilities. All it takes is a small networking session at the conference to learn about others' perspectives and share your own, and when all these thoughts and ideas come together, it is wonderful what they can do.

Here at WiDS Mumbai, we realised that although we were doing justice to INSPIRE and SUPPORT through our annual conference, we could do more to EDUCATE and make the mission complete. That's when Khushboo, my co-ambassador, pushed me to take the first workshop under the WiDS Mumbai banner for students at her alma mater. I couldn't be more thankful for this opportunity, which made me realise that I genuinely enjoyed teaching for the whole of two hours.



My workshop was a hands-on one. I started with a brief on supervised and unsupervised learning approaches and then came to my topic of neural networks. I first explained what a neuron is, how it functions, and what its activation function means, then gave a very high-level overview of how a neural network learns through adjustments to its weights and biases, and just scratched the surface of the back-propagation and gradient descent algorithms.

To build the right intuition about neural networks, I had the students code along with me a simple network that learns a sine function. At first I purposely selected parameters that would end up in an overfitting scenario, which helped me explain this very important concept. Next, I gave a brief on Convolutional Neural Networks and their layers (convolution, pooling, dropout, flatten, dense), which we then implemented for the classic digit recognition task on the MNIST dataset. I used the Keras API, which is very easy to understand, to build the networks, and made sure to explain all the arguments used: the loss function, epochs, batch size, optimizer and so on.
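The sine-fitting exercise can be sketched even without Keras; here is a minimal NumPy version (not the workshop notebook's code) of a one-hidden-layer network trained by plain gradient descent, showing the weight-and-bias adjustments and back-propagation mentioned above:

```python
import numpy as np

# Minimal sketch: a 1 -> 32 -> 1 tanh network learning y = sin(x)
# by full-batch gradient descent on mean squared error.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.2, (32, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # network prediction

losses = []
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Back-propagation: chain rule through the two layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)    # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])   # the loss should drop substantially
```

In the workshop, Keras hides this loop behind `model.fit`, but the underlying mechanics are the same.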

Towards the end, we discussed various applications of CNNs, and I demonstrated one of them: the code I wrote for the WiDS Datathon 2019 oil palm plantation problem, hosted on Kaggle, which uses satellite images. This let me introduce two more concepts: dealing with an unbalanced dataset, and how useful transfer learning can be, since I had used a pre-trained CNN model in my code. Here is the link to my workshop notebook: https://github.com/WiDSMumbai/WiDS_Workshop/blob/master/workshop.ipynb
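On the unbalanced-dataset part, a common first step is inverse-frequency class weights. A small sketch with hypothetical labels (the counts below are made up, not the Datathon's actual class balance):

```python
import numpy as np
from collections import Counter

# Hypothetical unbalanced binary labels, e.g. far fewer "has oil palm
# plantation" images (class 1) than "no plantation" images (class 0).
labels = np.array([0] * 900 + [1] * 100)

# Inverse-frequency weights: weight = n_samples / (n_classes * count).
# In Keras this dict can be passed as model.fit(..., class_weight=...).
counts = Counter(labels.tolist())
n, k = len(labels), len(counts)
class_weight = {c: n / (k * cnt) for c, cnt in counts.items()}
print(class_weight)
```

The minority class ends up with a proportionally larger weight, so each of its examples contributes more to the loss during training.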

At the end of the workshop I was glad that I could cover everything I had prepared in exactly two hours, and happy that I could address all the doubts raised in between. Out of curiosity about how much the students really understood, I also asked a few questions along the way, and their chorus of absolutely right answers definitely made me smile instantly.

Thank you for reading till the end, and I hope to take more workshops in future.

I'd also like you all to read my co-ambassador Julian's blog on WiDS Mumbai 2019, which highlights the 9 amazing speakers we had: https://medium.com/@jsj14/wids-mumbai-2019-bacbf0b673cf. You can find their presentations here.

See you in our future events :)



My Experience at MIT Grand Hack 2018

I am deeply interested in solving healthcare-related problems using advances in technology, so I knew that applying for the MIT Grand Hack, the largest healthcare hackathon, held at the MIT Media Lab, would give me a great platform to explore this. Applying involved filling out a Google doc with a few questions: briefly explain your skills, projects, ideas and so on. Results were sent out on a rolling basis, so it's good to apply as early as possible. I was selected and got notified a month before the hackathon, held on 13-15 April, leaving me with enough time for visa applications and other bookings.


I reached Boston two days before the event, so I had enough time to explore the MIT and Harvard campuses and get a feel for student life there. The campus was beautiful, situated across the Charles river and full of students always busy doing something or the other.


On the evening of the 12th, a pre-hack event was organised at the Continuum design centre in Boston's beautiful Seaport district, so that all the participants could network with each other, meet a few of the mentors and discuss various ideas. I was amazed to see people from varied backgrounds like medicine, pharma, business, computer science, mechanical engineering and academia, and of a wide age range, right from high schoolers to retired gentlemen.


The event started the next evening. There were two tracks, Global Health and Connected Health, and I was placed in the Connected Health track. There were some great keynote speakers from Boston-based healthcare startups and firms, as well as inspirational doctors addressing all the participants.

It all started with the idea pitching and team formation session, where anyone with an idea could come up and pitch it, and the others could see which idea they liked and wanted to work on. I particularly liked the idea Mike pitched, about trying to structure unstructured clinical trial data. During the team formation and networking session, where people roamed around discussing ideas and trying to find the right team, I realized I wanted to work on Mike's idea more than anyone else's. We finally had our team, with me being the only undergrad student with almost no experience; the others, Mike, Amy, Raymond, Suhas and Rahul, were all pretty experienced in their fields. One very good thing about our team was its diversity, with people from software, data science, pharma, and managerial backgrounds.

Over the weekend, between brainstorming sessions within the team and with many mentors, working out the technicalities and the business model, and finally shaping the idea for the final pitching session, I genuinely learnt a lot. It is always great to be on a team with experienced members, and the best part was that all of them were so down to earth. Honestly, I may not have contributed much to the ideation, but I learned a lot by looking at the idea through a fresher's eyes. Totally grateful to my entire team.

To give a brief of the idea: TRIAL CONNECT, our project, is a platform that quickly matches the right patient to the right trial. A clinical trial has many inclusion and exclusion criteria, and because all of this information is very unstructured, finding the right set of patients for a trial is a difficult and lengthy process. In the US all trials must be registered with clinicaltrials.gov, so that becomes our database of trials, and the patient data comes from EHR/EMR systems. Using natural language processing and machine learning algorithms, Trial Connect acts as an interface between the two databases and fetches the best match. There is more to this idea, but all in all it is a solution that provides value to multiple stakeholders: patients, physicians, EMR companies, CROs (Contract Research Organisations) and pharma companies.
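As a toy illustration only (this is not the hackathon code, and the trial IDs and conditions are made up), once the criteria have been extracted into structured form, the matching step reduces to simple set checks:

```python
# Toy sketch of the Trial Connect matching idea: once NLP has turned
# free-text eligibility criteria into structured sets, matching is a
# set-containment check. All IDs and conditions below are hypothetical.
trials = {
    "NCT-toy-001": {"include": {"diabetes"}, "exclude": {"pregnancy"}},
    "NCT-toy-002": {"include": {"hypertension"}, "exclude": {"diabetes"}},
}
patients = {
    "p1": {"diabetes", "hypertension"},
    "p2": {"hypertension"},
}

def eligible(conditions, trial):
    # The patient must satisfy every inclusion criterion
    # and trip none of the exclusion criteria.
    return trial["include"] <= conditions and not (trial["exclude"] & conditions)

matches = {p: [t for t, tr in trials.items() if eligible(c, tr)]
           for p, c in patients.items()}
print(matches)
```

The hard part in practice is the extraction step, not this check; that is where the NLP and machine learning come in.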


And guess what? We won the $500 theme prize at the end. Happy happy :) 
Mike and Amy really did an amazing job during the final pitching session, and Amy made the presentation like a professional. I mean, these soft skills are just so important and play a key role in winning. Something that is now noted in my mind.



At the end, I'd just say to anyone who gets this opportunity: make the best of it. Whatever happens, it's always going to be a nice experience meeting and interacting with so many people, and who would want to miss the chance to visit MIT and the beautiful city of Boston, with which I am already in love?


Thanks for reading :) 








The conference day! Women in Data Science.



Women in Data Science, Mumbai Conference, 10th March


This year I tried something different. It all started with one thought that pricked me: Data Science is an interesting field, there is so much happening in it, so what if I could build a community where people get to understand what working in this field feels like and the various projects that can be undertaken in it? That was when I came across Women in Data Science, a global conference organised by Stanford University's ICME department. Being a strong supporter of diversity, I felt this was just the right platform, so I reached out to the coordinators and expressed my willingness to organize a similar conference in Mumbai with their constant support. Meeting my co-ambassadors Julian, Khushboo and Aqsa and working together to make it happen was a journey full of learning, encouragement, meeting new people and finally pulling off this conference successfully. Here go the highlights!

Forbes recently stated that the biggest event on the data science community calendar is the one that showcases women in the field: the global Women in Data Science conference, the largest analytics conference on Earth. The first edition of the Mumbai WiDS conference, held on 10th March at VJTI College, featured 8 female speakers and was attended by around 105 participants, a mixed group of students, industry experts and researchers.

The conference started with the WiDS ambassadors welcoming all the attendees and laying out the agenda for the day.

The keynote speaker, Mrunali Sathe, a business-oriented leader, addressed the audience and talked about taking control of the narrative. She encouraged the women to seek growth in whatever they are doing and stressed the importance of data in business.

The next keynote speaker, Shweta Doshi, talked about her EdTech startup journey, inspired the audience and made them realize how much good networking helps you achieve.


(Left-Mrunali Sathe, Right-Shweta Doshi)

Kritika Jalan, a business analyst and our next speaker, gave a brief overview of the learning path to becoming a data scientist, using her own learning curve as an example. It is interest that drives you to learn more in this field, and it is never too late to start.



Dr. Anjali Chopra, a consumer insights specialist and educationist, spoke next and thoroughly grabbed the audience's attention by sharing her vast knowledge of the role of marketing analytics and text mining in targeting the right customer. Through various case studies she explained the role of logistic regression and how to interpret the odds ratio. She also briefly covered the data modelling platform RapidMiner and the importance of validating results.
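A quick aside on the odds-ratio interpretation she covered: in logistic regression the log-odds are linear in the features, so a coefficient beta multiplies the odds by exp(beta) for each unit increase in that feature. A tiny sketch with a hypothetical coefficient:

```python
import math

# Hypothetical fitted logistic-regression coefficient for some feature.
beta = 0.7
odds_ratio = math.exp(beta)      # odds multiplier per unit of the feature

p = 0.2                          # baseline probability of the outcome
odds = p / (1 - p)               # baseline odds = 0.25
new_odds = odds * odds_ratio     # odds after a one-unit increase
new_p = new_odds / (1 + new_odds)  # converted back to a probability
print(round(odds_ratio, 2), round(new_p, 3))
```

So an odds ratio around 2 roughly doubles the odds, which is the kind of interpretation the case studies walked through.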


Our next speaker, Sarah Masud, a data scientist at Red Hat, talked on a very interesting topic: the seven sins of a data science newbie and how not to commit them. She covered seven misconceptions that a college student working on data science projects tends to have, and gave a reality check on the post-college work environment. It was very insightful.

Link to Slides: 


At this point we broke for the networking lunch, during which we also played the delayed broadcast of Latanya Sweeney's keynote address from the main Stanford WiDS conference held on 5th March.


After the lunch break, Saloni Bhogale, a Young India Fellow at Ashoka University, took charge of the dais. Learning about her work with political data and parliamentary research was fascinating, and it definitely opened up a new interest among the audience in working with such data.



Next up was Sahiba Chopra, a data scientist at SugarBox, who talked about the intersection of behavioral economics and data science: how social media feeds get tailored to exploit users' confirmation bias, and how fake news draws the most engagement, a reminder that we need to cross-check whatever we share online.

Link to Slides: 


The last speaker was Nitika Goel, a data scientist at FlexiLoans, who shared insights from the FinTech world. She gave an overview of the Natural Language Processing (NLP) methods used to identify the right customers to lend money to.

Link to Slides: 


To sum it up, it was a very engaging day that opened up new interests in everybody, and it was great to see the amazing networking going on among the attendees. We also had prizes for a small filler quiz session and for the best tweet of the conference. Huge thanks to our sponsors, Fractal Analytics, VJTI Alumni Association, DataGiri and GreyAtom, for all the support.

It was amazing working in this team! (From left to right- Khushboo, Aqsa, myself and Julian) 


Thanks a lot to Judy and Ashley from WiDS Stanford for constantly supporting us and giving us this opportunity.

Just as an end note: anyone who supports this initiative and wants it to grow next year should feel free to hit me up for further discussions.

Cheers!!

My Experience at Hack the North.

For those of you who haven't heard of Hack the North: it is one of the biggest and best organised hackathons, where around 1000 people from all over the world come together for a 36-hour hackathon at the University of Waterloo in Waterloo, Ontario. It is one of the most exciting Major League Hacking (MLH) events. This year it happened on the 15th, 16th and 17th of September.

During my summer vacation, after two wonderful hackathons and getting a little obsessed with them, I started thinking about something more exciting: how about trying an international hackathon? That was when I came across Hack the North, and why not apply? After all, it provides travel reimbursements too :)

So I went ahead and registered, which meant uploading my resume and writing a small essay of about 500 words describing a project I'm proud of and how I overcame the difficulties during it. To my surprise, on the morning of 6th August I woke up to a mail in my inbox saying I got selected! :D

Next up was applying for a Canadian visa. TBH, the visa procedure was more challenging than the hackathon. Until the very end I wasn't sure if I would be able to attend, because visa processing took a hell of a lot of time, it being peak season. You won't believe that I finally got my visa on the afternoon of 14th September and boarded my flight that very night; what a touch-and-go situation it was! Damn! I was so jet-lagged during the hackathon because I headed straight from Toronto Pearson Airport to the university to start hacking that very midnight.

You know what? The PM of Canada, Justin Trudeau, was present at the opening ceremony!! His welcoming remarks were the highlight. There were really nice workshops and tech talks happening over the weekend, but unfortunately I wasn't free enough to attend any of them except the Diversity and Inclusion panel discussion. Tracy Chou and Cat Noone were my favorites among the panelists.

This was the beautiful engineering building where we spent the 36 hours.

Link to all the amazing projects that were built during the hackathon: https://goo.gl/JuyqHg
Link to my team's project: https://goo.gl/sie7W1

Want to see the crowd that gathered near the sponsors to collect the swag? :P

I would definitely say to whoever gets an opportunity to participate in Hack the North: just go for it, because it is the best place to network and get to know people from different parts of the world. This was a really nice learning experience for me, right from dealing with visa procedures, to traveling alone and changing terminals at layovers, to finally doing a hackathon while totally jet-lagged.

This also gave me an opportunity to visit my relatives living in Toronto and spend time with them visiting places in and around the city, like the stunning Niagara Falls and the CN Tower.




One week at HackEYEthon

This blog describes how I spent my last week, in Hyderabad, where I experienced and got to see a lot in just seven days. On the morning of 24th June I reached Hyderabad from Bangalore by bus; it was my first time traveling alone between two unfamiliar cities on an overnight journey.

In Hyderabad, a batch-mate and very good friend of my dad's arranged a visit to NFC for me on my first day. NFC is the Nuclear Fuel Complex, where I got to see fuel rods manufactured from scratch: natural uranium is converted to uranium dioxide pellets, which are filled into zirconium alloy tubes and, after a lot of tests and processing, bundled into a fuel assembly. Even the tubes and sheets of various dimensions were manufactured at the same place, by processes I had never heard of before. Obviously a day-long plant visit like this is tiring, but trust me, it was worthwhile. In the evening I took an Uber to Banjara Hills, where L V Prasad Eye Institute had accommodated us for the week. And the real story starts here :P

This year saw the fifth edition of the Engineering the Eye hackathon, organised by L V Prasad Eye Institute and the MIT Media Lab. I was so happy and lucky to be selected. I got to the accommodation and met my two room-mates: Sneha, who is also my classmate, and Divya, a very dynamic girl whom I met for the first time there. Never did I imagine that I would share so many beautiful memories with them over the week.

The next morning, all the participants, mentors and clinical mentors assembled in the auditorium on the 6th floor of the hospital. (The balcony had a nice view of Hyderabad from here...)


We first had a mandatory introductory round, where I realized I was among a very diverse and highly knowledgeable group of people, many of them machine learning enthusiasts (ML and image processing became the buzzwords of the hackathon) from colleges all over India, and I knew this was the best place to network and build connections. After that, Dr. Anthony Vipin Das took the stage, welcomed all of us with great energy, and shared the story of how Srujana, the Centre for Innovation, was born and its journey so far. If you ever visit LVPEI, you will notice that the ground floor, right where you enter the campus, is given over to Srujana; that is the importance given to innovation in healthcare there. A glimpse of Srujana:


Coming back to Dr. Vipin: I am glad I met him, because he is such an amazing doctor, researcher, innovator and TED speaker, and I am totally amazed by his research and all his work. Have a look at this to see for yourself: https://www.youtube.com/watch?v=UzBTBAoAFE0&t=67s

It was a pleasure to hear Dr. Pravin (from the Tej Kohli Cornea Institute), Dr. Nandini, Dr. Ashutosh, Dr. Jagdish and Dr. Subhadra (a specialist in preventing blindness among premature babies) speak about their work and constantly encourage us to take up problems in eye care and build effective solutions. Following them, the four team-lead technologists of Srujana, Sankalp, Koteshwar, Sandeep and Ashish, spoke about their commendable work and projects at Srujana. By now you might have gathered that this was not merely a hackathon but also a place where we were introduced to great personalities who guided us and were very open to all our questions. We got to hear Dr. Ramesh Raskar (head of the Camera Culture group at the MIT Media Lab), Prof. Ramesh Loganathan (IIIT Hyderabad), Prashant Gupta (from Microsoft) and Mr. Annamalai (from CYIENT) speak. Dr. Sangwan (director of Srujana) talked to us on the second day and stayed by our side for the rest of the week, constantly motivating us and inspiring us with his expanse of knowledge both within and beyond his field. This just reiterates the fact that if you have an idea and want to do something in med-tech, no one is going to stop you; rather, you will have a huge pool of people supporting you.

In the second half of the first day, the mentors of the 8 projects selected for this year's hackathon each gave a presentation describing their project and its goal for the week. The projects were:

- ACDC: Anterior Chamber Depth to Corneal thickness ratio quantifier, which uses the Van Herick grading system to quantify the ratio
- BRaVO: Branch Retinal Vein Occlusion, quantifying the intensity of occlusion in the patient's retina
- OIO: Open Indirect Ophthalmoscope (do read more about this awesome project online)
- Creative game building for the blind
- Chat-bot: collecting symptomatic information and mediating between patients and doctors
- Corneal Topographer: a portable device that generates corneal topography on the smartphone itself
- KAHT: Keratoconus Analysis and Hypothesis Testing, which involves OCR libraries and machine learning algorithms to extract useful parameters from scans
- and last but not least, DESQ: Dry Eyes Syndrome Quantifier

After all 8 presentations, the participants had to shortlist the projects they would like to work on based on their interests, and discuss with the mentors to form a better picture. As for me, one thing I was clear about was the domain I wanted to work in: image processing. Based on that, I shortlisted ACDC, DESQ and BRaVO. I went around talking one-on-one to all the mentors and started prioritising these projects. I observed that many people interested in image processing leaned towards ACDC, maybe because its mentors presented the task quite clearly, whereas I leaned towards DESQ because it was a completely new approach and a new idea: it involved working closely with the clinical mentor, understanding what exactly causes dry eye syndrome and how, and developing our own approach over the week. Finally, after all the discussions, which extended into the evening tea break, I was clear on my priorities: DESQ, then ACDC, then BRaVO. Towards the end of the day, when the mentors started reviewing and accepting participants into their teams, I was very happy to get into DESQ, the one I had wished for.

My experience with Team DESQ:
We were mentored by Aditya, Rohith, Serena and Dr. Nagaraju (our clinical mentor). Meet the cool team:


Dry eye is becoming a very common disease nowadays; basically, when your eyes don't get sufficient moisture, you might be suffering from dry eyes. It is especially common in the IT sector, where people gaze at a laptop screen all day; such irritation of the eyes can eventually lead to dry eyes. Going a little biological: we have a tear film just above the cornea that consists of three layers. Closest to the eye is the mucin layer, then the watery aqueous layer, and then the oily lipid layer. Every time we blink, this tear film gets refreshed. If you find a person blinking more than usual, it is probably because their aqueous layer evaporates faster, so they need to blink again to refresh it. Our focus for this hackathon was the lipid layer: under proper illumination of the eyes, interference patterns form on the lipid layer, and by studying those patterns the irregularities in the layer can be quantified. The mechanical and electronics teams worked closely to design the evenly illuminated dome, finding the exact beam angles of the LEDs used and developing the right SolidWorks model, which was then 3D-printed, with all 33 LEDs soldered onto it in a parallel arrangement. Finally, this is how it looked:


The task for the image processing team was to extract the region of interest from the image and then, based on the different colors present in the lipid layer's interference pattern, develop an equivalent 3D model that helps visualize the irregularities in the thickness of the lipid layer. According to previous research, a particular color in the interference pattern corresponds to a particular lipid layer thickness; for example, blue means 180 nm and brown means 130 nm. I was a part of this image processing team, and this is my work sample:
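The color-to-thickness step can be sketched in a few lines. This is a hedged Python illustration, not the actual code (which, per the link below, is MATLAB); the RGB reference triples and the third thickness value are made-up placeholders, and only the 180 nm/130 nm figures come from the research mentioned above:

```python
import numpy as np

# Map each pixel of the interference pattern to a lipid-layer thickness
# by choosing the nearest reference color. Reference RGB values here are
# illustrative placeholders, not calibrated values.
reference = {
    (40, 40, 160): 180.0,    # a "blue" shade  -> ~180 nm
    (140, 90, 50): 130.0,    # a "brown" shade -> ~130 nm
    (200, 200, 200): 100.0,  # hypothetical third reference shade
}
colors = np.array(list(reference.keys()), dtype=float)
thickness = np.array(list(reference.values()))

def thickness_map(img):
    # img: H x W x 3 RGB array; distance to each reference color,
    # then pick the nearest one per pixel.
    d = np.linalg.norm(img[..., None, :] - colors, axis=-1)  # H x W x K
    return thickness[d.argmin(axis=-1)]                      # H x W

# Toy 1 x 2 "image": one blue pixel, one brown pixel.
toy = np.array([[[40, 40, 160], [140, 90, 50]]], dtype=float)
print(thickness_map(toy))
```

The resulting per-pixel thickness array is what gets rendered as the 3D surface showing the lipid-layer irregularities.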
The original image:

This shows the irregularities in the lipid layer:
Check out the code here: https://github.com/OmPrakash95/Tearoscope/tree/master/MATLAB

There is more to this project, like finding the NITBUT, the Non-Invasive Tear Break-Up Time (we started on this but did not complete it). Dry eyes cannot be diagnosed with just one test; multiple tests together establish the severity of dry eyes in a patient. Our ultimate goal is to create a cheap, portable device that performs multiple tests in one and finally not just detects dry eyes but also quantifies it. We were happy to achieve our hackathon goals: our device was tested on a real patient with an irregular lipid layer, and we successfully obtained the interference patterns when viewing through a slit lamp (a device used for focusing on a part of the eye) with the evenly illuminated dome we built. A huge thanks to Dr. Nagaraju for all the support throughout the hackathon; I really learnt a lot.

My hackathon experience doesn't end there. One day in between, as a break for all of us, a trip to Tudukurthy village was arranged, where we saw LVPEI's primary and secondary vision centers and learned how they function and provide free services to the villagers. One evening, a hospital tour was also arranged, which really excited me: the eye is a small organ (but a very important one, and the only exposed part of the brain), yet here was a whole big hospital built just for eyes, with large dedicated departments for retina, cornea, glaucoma and so on. It was a great experience to see the different devices used for various diagnoses in action. The best part: I was the only one who got a chance to be tested for glaucoma on a sophisticated automatic visual field testing device, as part of a demonstration obviously. The test lasted about 4-5 minutes, and thankfully I don't have glaucoma, and I have a very normal foveal sensitivity of 37 dB (the maximum is 40 dB; Google which part of the retina is called the fovea and which part is the blind spot) :P

The last day was the final presentation day, where we showcased everything we had done throughout the week. During the closing ceremony we heard from Dr. Gullapalli Rao (chair of LVPEI), who started his talk by asking: why are you all here? We answered that we were there to learn, network, create social impact and so forth. But the answer the doctor expected was that we were there to eradicate blindness; everything we had answered was a subset of that. This was an important moment for me: I realised I am part of this cause, on the journey to help everyone enjoy the beauty and gift of vision. Though I am only a tiny part of it now, I am proud of it and wish to do much more in future.

To sum it up, it was a wonderful experience meeting so many like-minded people, working with them for a whole week, and exploring the biomedical field, which I wish to pursue further. This was definitely a turning point in shaping my research interests. And of course, how can I forget to mention the yummy Paradise Hyderabadi biryani! :P

Here's the DESQ group photo, taken after the successful testing of our prototype on the patient:


To know anything more feel free to reach out to me personally and I'll be glad to discuss and share whatever I can with you. Thanks :)


Fundamentals of Image Processing demystified! Contd...

Hey again! 
Starting from where I left...

Transform: What is a transform? Yes, you are right, it changes a system/signal from one domain to another, but that is actually the transformation process. A transform itself is simply a linear operator (linear meaning that sums and scalings of signals are preserved), and crucially it should not lose any of the signal's content when you view it in the other domain. It's like me going from one room to another: the room is different, but I'm the same. Why is a transformation necessary? Because it separates out the frequencies, which helps us understand the signal better. If a transformation exists, then its inverse should also exist; that is, if I go from my room to another room, I should also be able to come back, being the same throughout. Let me write down some transformation equations:
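Assuming the three equations referred to are the standard Fourier analysis/synthesis pair together with Euler's relation (which the cos/sin remark below relies on), they read:

```latex
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt
\qquad
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{j\omega t}\, d\omega
\qquad
e^{j\omega t} = \cos\omega t + j\sin\omega t
```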
What do you understand by these three equations? Let me explain in a dramatic way :P I'm a Mumbaikar, and Mumbai is my home domain, where I can manage everything independently; this is f(t) in the time domain. If by flight/train or some such means I go to Bangalore, that is a new place where I know nothing, so I'm dependent on someone. If this someone is a higher authority, I stand no chance of enjoying my freedom and would have to adjust accordingly. To draw the analogy: the flight/train is the transform, Bangalore is the transformed domain, and that someone of higher authority is the exponential function. Just as I lose my freedom, the function loses its time characteristics to the huge exponential signal (the RHS of the equation is no longer a function of time). Now why choose an exponential signal? Because it is composed of cos and sin signals, which are known signals, just like the authority who is my point of contact in Bangalore would be someone known to me. This opens something up for us to think about... is losing time a good thing? Just go through the following scribble:
We observe three different signals, yet taking their Fourier transforms gives us exactly the same frequency composition. This happens because in the frequency domain you don't get any feel of time, just as in the time domain you don't get any feel of frequency. So we are in a fix: we cannot figure out which is the original signal from its Fourier transform alone. Coming back to the exponential signal, or that someone of higher authority: what if that someone now has less authority than you? Then you can still enjoy some of your freedom, or in this context, by making the time extent of the transform finite, some time characteristics can be retained. This is the STFT, the Short-Time Fourier Transform. There is only a minor difference between the STFT and the FT: in the STFT, the signal is divided into segments small enough that the signal can be assumed stationary within each. For this purpose, a window function "w" is chosen, whose width must equal the segment of the signal where this stationarity assumption is valid. In the equation, this shows up as the signal being multiplied by the shifted window w(t - t0) before transforming, which makes the effective time extent finite.

Let's observe one more interesting thing here. We know if we use a window of infinite length, we get the FT, which gives perfect frequency resolution, but no time information. So, 
Wide window ===>good frequency resolution, poor time resolution. 
Narrow window ===>good time resolution, poor frequency resolution.
Let us see that for ourselves below. Here we have taken a non-stationary signal with four different frequency components at different times. 
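If you'd like to reproduce such a signal yourself, here is a minimal numpy sketch; the sampling rate and the four frequencies are assumed values for illustration, not necessarily the ones used in the figure:

```python
import numpy as np

fs = 1000                  # sampling rate (Hz), an assumed value
seg = fs // 4              # each frequency lasts a quarter of a second
freqs = [10, 25, 50, 100]  # four frequency components, in Hz

# Concatenate four pure tones: the signal is non-stationary because
# its frequency content changes over time.
t = np.arange(seg) / fs
x = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
print(x.shape)  # (1000,)
```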

Now, let's look at its STFT and we find that these four peaks are located at different time intervals
along the time axis. 

The following figure shows four window functions.  I will now show the STFT of the same signal given above computed with these windows. 

First, let's look at the narrowest window. We expect the STFT to have very good time resolution but relatively poor frequency resolution, and indeed we note that the four peaks are well separated from each other in time.

Now let's take a wider window:

Even wider:

Note that the peaks are no longer well separated from each other in time; however, the frequency resolution is much better. Another thing we can infer is that low-frequency signals are better resolved when the window function is wider, as we then get more frequency resolution and less time resolution, while high-frequency signals are better resolved when the window function is narrower, since a fast-changing signal needs better time resolution.
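For readers without the figures, the tradeoff can also be seen numerically. Below is a minimal hand-rolled STFT in numpy (window lengths and hop sizes are arbitrary choices): a narrow window produces many time frames but few frequency bins, a wide window the reverse.

```python
import numpy as np

def stft_mag(x, win_len, hop):
    """Magnitude STFT with a Hann window: slide, window, FFT each segment."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, n_freq_bins)

# The four-tone non-stationary signal from before.
fs = 1000
t = np.arange(fs // 4) / fs
x = np.concatenate([np.sin(2 * np.pi * f * t) for f in (10, 25, 50, 100)])

narrow = stft_mag(x, win_len=32, hop=16)    # many time frames, few freq bins
wide = stft_mag(x, win_len=256, hop=128)    # few time frames, many freq bins
print(narrow.shape, wide.shape)
```

The shape of each result makes the resolution tradeoff explicit: you can refine time or frequency, but not both at once with a single fixed window.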


These examples should have illustrated the inherent resolution problem of the STFT. Anyone who wants to use the STFT faces this problem: what kind of window to use?

To state the resolution problem once more: one cannot know the exact time-frequency representation of a signal, i.e., one cannot know what spectral components exist at what instants of time. What one can know is the time intervals in which a certain band of frequencies exists, which is a resolution problem. The wavelet transform (WT) solves this dilemma of resolution to a certain extent. I'm not going into the details of the wavelet transform in this blog, though I have already given an idea of it towards the end of my first blog on Neural Networks.

Digital Filters: the last topic of this discussion. Designing the right filter is very important for any application, and we all realise this. This is again a vast topic, but I'll summarise it in the following charts.

It is very good to know in depth how a filter is designed by the various methods (I haven't explained the methods here, just mentioned them above), because that is what lets us grasp or visualise a problem and arrive at the right solution. That said, all these methods are available as direct functions in any toolbox, so you don't need to sit and design the algorithms yourself; what you must know is when and where to use which method. Reading about the filter design frameworks and the different methods for both FIR and IIR filters from any standard DSP textbook would be helpful at this point.
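As one concrete taste of a single method from those charts, here is a sketch of the window method for FIR design in plain numpy: truncate the ideal sinc impulse response and taper it with a window. The cutoff, sampling rate and tap count are assumed values for illustration.

```python
import numpy as np

def fir_lowpass(cutoff, fs, numtaps):
    """FIR low-pass via the window method: truncate the ideal sinc
    impulse response and taper it with a Hamming window."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n)   # ideal low-pass impulse response
    h *= np.hamming(numtaps)           # taper to reduce ripple
    return h / h.sum()                 # normalise DC gain to 1

fs = 1000
h = fir_lowpass(cutoff=100, fs=fs, numtaps=101)

# Check the frequency response: near unity in the passband,
# tiny in the stopband.
H = np.abs(np.fft.rfft(h, 4096))
f = np.fft.rfftfreq(4096, 1 / fs)
print(H[f < 50].min() > 0.99, H[f > 200].max() < 0.01)  # True True
```

In practice you would call a toolbox routine rather than write this yourself, which is exactly the point above: the value of knowing the method is in choosing it, not re-implementing it.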

With this I wrap up the topic and I hope you were able to follow me throughout this blog and the previous blog. :)
  
  
  


Fundamentals of Image Processing demystified!

Image processing does sound cool, doesn't it!? But are your signal processing fundamentals strong enough? After all, an image is also a 2D signal. Image processing can be implemented on various platforms like MATLAB, Python etc., but your expertise doesn't lie in that; what matters is your understanding of the subject and the clarity of your fundamentals, which is what lets a very simple algorithm be built for any complex problem. So here I'm not really concerned with explaining what image processing is, but I'll make sure I discuss certain concepts that will make learning image processing easy.
Certain key terms that you should be thorough with are:

  • Frequency
  • Signal
  • System
  • Convolution
  • Correlation
  • Transform
  • Digital Filters
Ask yourself what you understand by each of the above terms. Keep your answer in mind and read through the following explanations to form a better understanding.


Frequency: You are listening to PM Narendra Modi giving a speech, and on another channel, a cricket commentary. It goes without saying that you may find the speech easier to comprehend, with every word distinctly heard, while the commentary is a little more difficult to follow. What makes these two sounds different is their frequencies. Frequency is nothing but the rate of change of a signal. A low-frequency signal is slow changing and carries a lot of information, while a high-frequency signal can be a little irritating to our ears at times because it is fast changing, and so it is often treated as noise. The same goes for why babies sleep to a lullaby rather than a rap song.
The slow-changing signal is thus of interest to us, but can you think of a problem with it? Yes: being slow changing, it carries very little energy and would die out before reaching a great distance. This can be tackled either by amplifying the signal or by frequency modulation. Frequency modulation is nothing but riding the low-frequency signal on a high-frequency carrier that can take it a greater distance. Like your lunch box: it carries the food a long way, but you eat the food inside, not the box, right? :P
So, frequency plays an important role because looking at the frequency you can understand if that signal is of interest to you or not.
To give an idea of one application: suppose there is an image with a tumor cell that needs to be detected. Throughout most of the image you may find uniformity, with neighbouring pixel values changing only by a small amount (slow changing), but where a tumor cell is encountered, because it is captured differently from the normal cells around it, there is a sudden change in pixel value, creating a high-frequency region. Hence the algorithm narrows down to just finding the high-frequency regions.
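To make this concrete, here is a toy numpy sketch; the image, the patch and the threshold are all made up for illustration. A smooth background has near-zero local change, while a small anomalous patch lights up under a Laplacian-style measure of how abruptly pixel values change.

```python
import numpy as np

# Toy 64x64 "scan": a smooth, slowly varying background plus a small
# bright patch whose pixel values jump sharply relative to neighbours.
img = np.fromfunction(lambda i, j: 0.3 + 0.001 * (i + j), (64, 64))
img[30:34, 30:34] = 1.0  # the anomaly

# Discrete Laplacian magnitude: high where pixel values change abruptly,
# exactly zero on the linear (slow-changing) background.
lap = np.abs(
    4 * img[1:-1, 1:-1]
    - img[:-2, 1:-1] - img[2:, 1:-1]
    - img[1:-1, :-2] - img[1:-1, 2:]
)
rows, cols = np.where(lap > 0.1)
print(rows.min(), rows.max())  # the flagged region hugs the patch boundary
```

The detector ignores the smooth background entirely and flags only the border of the patch, which is where the high-frequency content lives.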

Signal: Everyone knows what a signal is. By the technical definition, it is just a function of one or more independent variables, an independent variable like time, say. It is important to know the characteristics of a signal in order to process it, and it will show different characteristics in the time and frequency domains. It is also important to know what type of signal it is: causal, LTI, and so on. Taking myself as an example: on my blog site I talk about technical stuff, but if you find me on Instagram or Twitter you'll see me talking about various other things like travel photography, social causes etc. So taking this blog site as one domain, you get only a little idea about me, but if you change the domain you can find more of my characteristics. The same goes for any signal: you have to deal with both the time and the frequency domain to deduce more characteristics and understand the signal better for processing.

System: Anything you see is a system, be it a mobile phone, a human body or any device. What happens inside the system is not at all important; what is important is to have an understanding of the system. By understanding I mean you should know what the system does when you give it a certain input. You attend a call by pressing green and reject it using red; that is understanding, and what happens inside is of no concern to us. It is also very necessary to know for which inputs the system becomes unstable. Essentially there are three things you need to bother about in any system: the dynamics of the system, i.e. the way the system changes with time; the transfer function of the system, which gives the relation between input and output; and finally the stability of the system.

All the signal processing is done in a computer. Real-life signals, being analog in nature, first need to be converted to digital before being given as input to the computer, so knowing analog-to-digital conversion and vice versa is essential. Any analog signal is first sampled, or discretised in time. At this step, following the sampling theorem is important (fs > 2fm), because when you do the reverse, obtaining the analog signal back from the digital one, if the sampling wasn't done properly the spectral copies will overlap and information is lost; this is called the aliasing effect. After sampling comes quantisation, where selecting the number of quantisation levels is important; it is given by 2^n, where n is the number of bits.
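Both points can be seen in a few lines of numpy; the frequencies below are assumed values chosen to violate the sampling theorem on purpose. A 30 Hz tone sampled at only 40 Hz (less than 2 x 30) produces samples indistinguishable from a 10 Hz tone, which is aliasing.

```python
import numpy as np

f_sig = 30   # signal frequency (Hz)
fs_bad = 40  # violates fs > 2*f_sig, so aliasing occurs

n = np.arange(8)  # sample indices
undersampled = np.sin(2 * np.pi * f_sig * n / fs_bad)

# The 30 Hz tone aliases to |fs - f| = 10 Hz: its samples match those
# of a (sign-flipped) 10 Hz sine exactly.
alias = -np.sin(2 * np.pi * 10 * n / fs_bad)
print(np.allclose(undersampled, alias))  # True

# Quantisation: n bits give 2**n levels.
bits = 8
print(2 ** bits)  # 256 levels
```

Once the samples coincide, no amount of later processing can tell the two tones apart, which is why the sampling theorem must be respected before anything else.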

Convolution: This is the best part. It is actually at the origin of every digital signal processing algorithm. First, the mathematical understanding: y[n] = Σk x[k]·h[n−k].
So, 'x' basically denotes the input signal along with noise, and 'h' is the impulse response of the system. The filter is the system, and convolution is actually a filtering process. One way to view it: you want to approach a senior for some guidance, but you don't know that person, so first you talk to others who know him and form a reference response; with the reference being an impulse, it becomes the impulse response. After that you approach him and finally form a complete response, taking both your input and the reference into consideration. You are the input 'x', that senior is the system with reference response 'h', and your output response is the convolution, giving 'y' as the resultant response. This is an explanation of 1D convolution. Now, as we observed, we need to flip, shift, multiply and add, all in one cycle, which gets tedious; in such a situation you change the domain and do a simple multiplication, which gives the same result. This is actually a property of convolution.
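The flip-shift-multiply-add cycle, the library routine, and the change-of-domain shortcut can all be checked against each other in a few lines of numpy (the signals are made up; 'h' here is a simple 2-point averager standing in for a filter):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # input signal
h = np.array([0.5, 0.5])             # impulse response (2-point averager)

# Flip-shift-multiply-add, done explicitly:
y_manual = np.array([
    sum(x[k] * h[n - k] for k in range(len(x)) if 0 <= n - k < len(h))
    for n in range(len(x) + len(h) - 1)
])

# The same thing via the library routine:
y_lib = np.convolve(x, h)

# Changing domain: convolution in time is multiplication in frequency.
N = len(x) + len(h) - 1
y_fft = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(h, N), N)

print(np.allclose(y_manual, y_lib), np.allclose(y_lib, y_fft))  # True True
```

All three routes produce the same output, which is exactly the convolution property the paragraph describes: tedious time-domain bookkeeping becomes a single multiplication in the frequency domain.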

Correlation: This gives the degree of similarity between two signals, in the form of a percentage. There are two types of correlation: auto-correlation and cross-correlation. Auto-correlation gives the similarity between a signal and the same signal after a certain period of time, and therefore allows prediction ahead of time (note: convolution looked back in time). Cross-correlation gives the degree of similarity between any two signals. A few application-specific examples: a patient under observation has an ECG taken every hour and compared with the previous one to measure similarity and predict the future response, which helps detect any heart problem. Speaker recognition is also based on this concept: my speech is compared with my own saved speech and the similarity is computed. The same way in biometrics, the degree of similarity between fingerprints is found, and a threshold is set; in highly secure places the degree of similarity required might be 90%, and so on.
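A sketch of the idea in numpy, with a made-up sinusoid standing in for the ECG; the similarity score here is a normalised cross-correlation peak expressed as a percentage, an illustrative measure rather than a real clinical or biometric one:

```python
import numpy as np

def similarity(a, b):
    """Peak of the normalised cross-correlation, as a percentage."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(a, b, mode="full") / len(a)
    return 100 * corr.max()

rng = np.random.default_rng(0)
t = np.arange(500) / 500
heartbeat = np.sin(2 * np.pi * 5 * t)          # the saved reference "ECG"

same = heartbeat + 0.05 * rng.standard_normal(500)  # this hour's noisy reading
different = rng.standard_normal(500)                # an unrelated signal

print(similarity(heartbeat, same) > 95)        # True: near-perfect match
print(similarity(heartbeat, different) < 50)   # True: well below threshold
```

A deployment would then just compare the score against a chosen threshold, like the 90% figure mentioned above for high-security biometrics.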

There are two more concepts, Transforms and Digital Filters, that I'm yet to cover; I'm keeping them for the next blog as I don't want to make this one unnecessarily long. If you want to dig deeper into these concepts, do: there is a hell of a lot to explore. Everything I have written here can also be explained with maths and formulas, which I have avoided.


