IIT TECH AMBIT NOVEMBER-DECEMBER ISSUE


To reach out with feedback, collaborations, submissions, or just to chat with us about the tech we love to talk about, email: connect@iit-techambit.in. Stay up to date with our latest Science, Technology and Business stories from the pan-IIT ecosystem and India by following our social media: https://www.facebook.com/iit.tech.ambit https://www.instagram.com/iit_techambit/ https://iit-techambit.in/



STARTUP




E-Tex: the smart textile venture

A startup from the Textile department of IIT Delhi has developed smart textiles with therapeutic, monitoring and performance-enhancing uses.

Written by GOPIKA ARORA Designed by SHARVARI SRIRAM

After smartphones, smart TVs and smartwatches, we now have smart textiles! E-TEX is an IIT Delhi based start-up that firmly believes that the need of the hour is textiles that go beyond their primitive function of covering the body. We need materials that are just as smart as the rest of our devices, with therapeutic, monitoring and performance-enhancing uses. Founded by two textile graduates, Anand Yadav and Harsh Gupta, along with Professor Bipin Kumar, the start-up already has quite a few innovative products to its name: antiviral clothing, FABrelax, heating garments and a health-monitoring t-shirt, to name a few. They also have an entire range of inexpensive, protective masks and COVID special kits under their KAWACH brand.


Under their antiviral line of clothing, they have t-shirts, shirts, aprons, socks, gloves and masks: the entire collection. These provide anti-viral and anti-odour protection and are crafted from material designed using advanced anti-viral technology to actively inhibit the fabric from hosting bacteria and viruses. The antimicrobial effect lasts over time and remains significantly effective for the first thirty washes. The comfortable garments, composed of ultra-soft premium cotton that is mercerised and bio-processed for a smoother and softer feel, were tested as per the AATCC standard and proved over 95% effective.

ANTIVIRAL T-SHIRT. Source: etex.in

FABrelax, marketed as 'your leg's best friend', is another exciting product that promises to massage and heal your body using compression therapy. It consists of a lightweight garment with an integrated circuit and a battery life of up to two hours, which can be wrapped around the limb to increase blood circulation. Available in different styles and colours, it exploits the concepts of static and dynamic compression. Specialised stockings made from elastic apparel provide static compression; these slowly stretch out vein walls and improve overall circulation, which helps eliminate swelling. Once put on, they allow gradual constriction to work its way up the leg from the ankle. The socks or stockings act as a replicated muscle, adding pressure to areas of restricted blood circulation and allowing veins to loosen up, reducing pain in the lower limbs. Dynamic leg compression, on the other hand, mimics the contraction of the natural muscles in your calves. By compressing your feet, it encourages the circulation of blood where it needs the most support. This, in turn, helps reduce swelling by letting fresh blood, nutrients and lymphatic fluid in, speeding healing and recovery from fatigue, pain, cramps and related issues. FABrelax is therefore not only an effective way to improve circulation, flush lactic acid and increase oxygenation of muscle tissue; it also helps you get more oxygen throughout your body, which in turn results in higher performance.

FABRELAX. Source: etex.in



Their soon-to-be-launched list of items consists of the heating garment and health monitoring t-shirt. Targeting frigid areas like Siachen, the heating garment looks and feels like any other jacket but has an entire circuitry integrated inside the fabric which keeps the user warm in icy cold weather. This circuit is detachable, which allows for the garment to be easily washed. Tiny batteries are used to power this circuit, and though they have a long battery life, they can be replaced when required by the user.

HEATING GARMENT. Source: etex.in

Health monitoring devices like smartwatches already exist, but why not have a health-monitoring t-shirt? A watch can get its information only through the small area it is in contact with; a t-shirt, however, hugs the body and can monitor much more, with better accuracy. The t-shirts would measure blood oxygen level, temperature, heart rate, respiration rate, deep breathing, sweat rate and similar parameters which are deemed necessary by the medical community and can't be accurately determined by your current smartwatches. These look like your regular t-shirts (high on fashion, of course) but come with a detachable circuit so they can be washed easily. As of now, fabrics with washable …

'We are working on technologies that can disrupt the textile sector,' says Anand Yadav, CEO of E-TEX, and their comfortable and luxurious textiles with special functionalities seem to be on the brink of doing just that.

HEALTH MONITORING DEVICE. Source: etex.in


STARTUP

Tapestry: IIT Bombay's Solution to the Global Pandemic

Here is a unique pooling method that can be used to screen thousands of COVID-19 patients with up to 100% accuracy. This innovation from IIT Bombay is currently leading the world in finding economical and efficient ways to perform coronavirus testing.

Written by ABIR MEHTA, PRANAV KASAT Designed by RENU SREE PINNINTI

This piece is a continuation of the article on Tapestry, the efficient and unique pooling method used to reduce the number of trials required to test patients for COVID-19. These innovative pooling methods were conceptualised and tested by Prof Manoj Gopalkrishnan and his team from IIT Bombay in association with NCBS (National Centre for Biological Sciences). They cater to the need for quick and efficient testing posed by the huge population of India. In the previous article, we discussed a few algorithms used to pool and test samples. Initially, the team tried an adaptive testing method similar to the binary search algorithm, where they would split the samples and perform sequential testing. This involves rejecting pools with negative results and then investigating the pools with positive samples. But this procedure proved to be slow, as multiple tests are performed sequentially and a single test takes 3-4 hours. Next, they tried a non-adaptive testing method called Combinatorial Orthogonal Matching Pursuit (COMP). Since it is non-adaptive, it was possible to get results for all samples in one round of testing. The results were accurate when the fraction of positive samples in a set was low, but as soon as that fraction increased, the algorithm yielded many 'false positives'.
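To make the COMP idea concrete, here is a minimal illustrative decoder (not the Tapestry team's actual code); the pooling matrix, the sample values and the function name are all assumptions made for this example. A sample that appears in at least one negative pool is declared negative; everything else is flagged positive, which is exactly where the false positives come from when many pools test positive.

```python
import numpy as np

def comp_decode(A, pool_results):
    """COMP decoding: declare negative every sample that sits in a negative pool;
    flag all remaining samples as (possibly false) positives."""
    A = np.asarray(A, dtype=bool)               # A[i, j] = True if sample j is in pool i
    pool_results = np.asarray(pool_results, dtype=bool)
    in_negative_pool = A[~pool_results].any(axis=0)
    return ~in_negative_pool                    # True = flagged positive

# Toy example: 3 pools, 4 samples, only sample 2 truly positive.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
truth = np.array([False, False, True, False])
pool_results = (A @ truth) > 0                  # a pool is positive if it holds a positive sample
print(comp_decode(A, pool_results))
# -> [False False  True  True]: sample 3 never appears in a negative pool,
#    so it comes back as a false positive, the weakness described above.
```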


To tackle the drawbacks of the methods discussed above, the team came up with ‘compressed sensing’ methods which yielded amazing results. In the compressed sensing methods, they could also determine ‘viral loads’ in a sample which indicates the severity of the infection. Now, let us delve into the technicalities of compressed sensing, its uniqueness and significance.

How does it work?

To go into the details of compressed sensing, we have to start looking into the mathematics behind the pooling of samples. We begin by declaring the variables used in the problem:

n – the number of patients to be tested.

m – the number of tests performed.

x – a vector of n elements denoting the estimated viral load of each patient. A viral load of 0 indicates no infection. Also, since the number of COVID cases in the country is sparse across the population, we assume x to be a sparse vector (one with many 0 entries).

y – a vector of m elements denoting the measured viral load of each pool of samples being tested. y can be thought of as a 'given' quantity. The main thrust of the problem is to find the estimated viral loads (x) given the measured viral loads (y) using m tests.

A – a binary mixing matrix of dimensions m×n. Aij = 1 implies that the jth sample is used in the ith test; Aij = 0 implies that it is not. The equation to be solved is y = Ax.

||y − Ax||² – this function is used to determine how close the measured viral loads are to the viral loads implied by our estimate. The closer this error is to zero, the more accurate our calculated viral loads.

Thus, we have two objectives: 1) find an x for which Ax is as close as possible to y; 2) ensure that x is sparse, meaning that it has a large number of 0 entries.
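As a rough, hedged illustration of the measurement model y = Ax (the real mixing matrix used by Tapestry is carefully designed, not random; the numbers below are placeholders), one can simulate a sparse viral-load vector and a binary pooling matrix as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 60, 24, 3                # 60 patients, 24 tests, 3 infected (assumed sparsity)

# Sparse ground-truth viral loads: mostly zeros, a few positive entries.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 10.0, size=k)

# Binary mixing matrix: A[i, j] = 1 means the j-th sample goes into the i-th test.
A = (rng.random((m, n)) < 0.3).astype(float)   # random design, purely illustrative

y = A @ x_true                     # measured viral load of each pool (noise-free sketch)
print(y.shape)                     # (24,)
```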

WE CAN SEE THAT N = 60, M = 24. HERE, THE ALGORITHM IS TRYING TO TEST 60 PATIENTS USING ONLY 24 TESTS. THE GREEN SQUARES INDICATE WHICH SAMPLES ARE PART OF WHICH POOL.


Since the number of variables here exceeds the number of equations, we need to apply certain further constraints on the problem to estimate the viral loads. There are a few methods that can be used to ensure the two objectives. Here, we can discuss the most successful ones so far.

Non-Negative LASSO: This method uses a cost function which penalises all solutions that do not satisfy the above objectives. The goal is thus to find a vector x which minimises the cost function, given by: J_lasso = ||y − Ax||² + ||x||₁. Here the ||x||₁ term is the sum of the absolute values of the entries of x. This part of the cost function penalises values of x with large entries, and it also ensures the sparsity of the vector x.

HERE IS A VISUALIZATION OF HOW THE APP WOULD LOOK. LAB TECHNICIANS WOULD BE GIVEN EXPLICIT INSTRUCTIONS REGARDING WHICH SAMPLES TO INCLUDE IN WHICH TEST.

The distance term (||y − Ax||²) is the sum of squares of all the entries of the vector y − Ax. This part ensures that Ax is as close to y as possible, i.e., that the vector x is such that, upon pooling the samples, we get results similar to those actually measured (the vector y).

We keep assigning positive viral loads to elements of the vector x until the error falls below a threshold ε. In the next step, all the remaining entries of x are assigned 0, thus ensuring sparsity.

NN LASSO was able to achieve a sensitivity of 99% and a specificity of 96% while being tested on computer-generated samples having one in ten positive cases (which is very similar to the ratio of positive samples in a set in India).
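A minimal way to minimise a cost of this form is projected gradient descent; the sketch below is only illustrative (the penalty weight lam, the step size and the iteration count are arbitrary choices of ours, and the paper's own solver may differ). Because x is constrained to be non-negative, the l1 term reduces to a simple sum of the entries.

```python
import numpy as np

def nn_lasso(A, y, lam=0.1, iters=5000):
    """Minimise ||y - Ax||^2 + lam * sum(x) subject to x >= 0
    by projected gradient descent (illustrative sketch, not the paper's solver)."""
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # safe step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + lam          # gradient of the squared error plus l1 term (x >= 0)
        x = np.maximum(0.0, x - step * grad)        # gradient step, then project onto x >= 0
    return x

# Reusing A, y and x_true from the earlier pooling sketch:
# x_hat = nn_lasso(A, y)
# print(np.round(x_hat, 2))   # should be close to x_true when x_true is sparse enough
```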

Non-Negative Orthogonal Matching Pursuit (NNOMP):

NNOMP is a greedy approximation algorithm which tries to solve the optimisation problem ||y − Ax||² < ε while keeping the number of non-zero elements in x to a minimum (this imposes the sparsity condition). The algorithm targets the samples most likely to contain the virus and assigns them the viral loads that minimise ||y − Ax||², our measure of the error in the calculated viral loads.

Although computationally expensive, this method has also achieved impressive results. On computer-simulated samples, it showed a sensitivity of 95% and a specificity of 98% in cases where one in ten samples was positive.

How are they different?

What makes the research done by the Tapestry team unique is the fact that having collaborated with the NCBS, they have been able to test their research on actual COVID samples. Rather than just produce algorithms to improve testing, they produced results in real-life lab situations, proving the value of their work beyond doubt.


To improve the practicality of their work, they have also developed an application to instruct lab staff on how to pool the samples. This ensures that the algorithms which the team have worked so hard on can be used by anyone performing PCR tests in labs. Further, the method suggested by Tapestry is non-adaptive. This means that repetitive tests are not required, rather, all the pools are tested once. Thus, they are able to achieve quicker results compared to usual pooling methods.

Keep an eye out for Team Tapestry

Currently, the team is working on various initiatives to test their pooling technique. From participating in international competitions such as Xprize to securing government funding, Team Tapestry is on its way to achieving fame on a global platform. They are also working on methods to make the entire COVID-19 testing process completely automated. If successful, an automated COVID test would require no personnel, making it significantly safer and more efficient than current testing methods. In an overpopulated country with limited resources, implementation of compressed sensing methods would prove to be a boon. Testing rates would increase manifold, and increased testing would mean more positive cases identified. Thus, we could provide timely treatment to affected people as well as isolate them to prevent the spread of the virus. To make this possible, the team is continuously striving to reach near-perfect accuracy by trying out more such methods. Team Tapestry is at the cusp of achieving greatness on several platforms, both nationally and internationally. Their success would ensure that limited testing rates no longer hinder the monitoring of the spread of COVID-19 in our country.

RESEARCH

Reconnaissance: identifying disruptions in power quality

Read on to find out how researchers from IIT Indore have used signal processing and soft computing techniques to increase the accuracy of classification of power quality disturbances.

Written by DANYAL SHAHID SHAMSI Designed by NIDAMANURI CHANIKYA GUPTA

One fine day, the equipment malfunctioned… It's a normal working day, and the machines in your factory seem to be working just fine. Suddenly, something goes wrong at the local distribution station, and the equipment screeches to a halt. Work comes to a standstill. You call in the engineers to figure out what went wrong and fix it. This incident leads to serious downtime for your factory and you incur losses. It also brings in the additional costs of calling in engineers to diagnose and fix the problem with the equipment or, worse, replace it. Clearly, this isn't a situation you as an industrialist would like to find yourself in. Power fluctuations lead to really serious losses for industries. Here are some figures from various studies that were conducted to assess the impact of power quality disturbances. In Italy, a survey of the economic impact of power quality fluctuations revealed that around 50,000-250,000 USD are lost annually by each factory. Voltage sags account for annual losses of around 200 million USD in South Africa. A Europe-wide survey revealed that industries would save around 10% of their annual turnover if power quality disturbances didn't occur. If we consider specific industries, the textile industry loses 15% of its annual net profit because of power quality disturbances. In the automotive industry, even a downtime of 72 minutes would amount to a loss of 7 million US dollars…


We can go on and on, but the above figures suffice to show that reliability of power supply is an important concern for industries. A simple voltage sag might lead to equipment malfunction, and thus to serious downtime. On a good day, it would just lead to increased losses in devices, thus reducing the efficiency of equipment. But even that is a serious issue when we are talking about costs. Industries aren't the only stakeholders when it comes to reliability of power supply. Hospitals rely on sensitive medical machinery, a malfunction in which could affect a critical operation, thus endangering several lives. Even for domestic consumers, reliability of power supply is a serious issue. Imagine your week's worth of work, which you didn't save, going down the drain as your computer reboots in response to a glitch in the local feeder.

With more and more non-linear equipment getting added to the grid, for instance converter-driven equipment, notably motor drives and consumer electronics, there is an increased occurrence of excessive harmonics being introduced into the system. Why this happens is pretty easy to understand. Converter-driven equipment often produces non-sinusoidal periodic signals, which are, in principle, composed of an infinite number of sinusoidal signals (Fourier's theorem). These devices, using rectifiers and other converters, produce multiples of the fundamental frequency (50 Hz or 60 Hz, depending on the utility). Excessive harmonics in the system can lead to overheating of transformers, produce 'electromagnetic noise' that can interfere with sensitive electronic equipment, and reduce the output of several devices.

THE GRAPH IN BLACK SHOWS A NON-LINEAR VOLTAGE SIGNAL HAVING THE SAME FREQUENCY AS THAT OF THE UTILITY. IT CAN BE REPRESENTED AS A SUM OF THE FUNDAMENTAL MODE AND ODD MULTIPLES OF THE FUNDAMENTAL MODE. Source: Graphs

Smart grids are emerging as an alternative to conventional grids. They would need to meet the increasing need for reliable, efficient and high-quality power. This, though, is made difficult by the integration of renewable energy sources, storage units and controllable loads with the grid, which add to the power quality disturbances present in the system. To meet these goals, one needs devices to mitigate the effects of power quality fluctuations. And for that, one needs to properly identify the types of power quality disturbances and their sources. This makes it necessary for operators to continuously monitor and identify these disturbances. The popular methods apply signal processing techniques to extract features from the voltage signal, which are later used to classify the disturbance using soft computing techniques. Since knowing one's nemesis is the first step towards countering them, let's move ahead and classify power quality disturbances into convenient categories.

Identifying the nemeses…

Power quality can be described as the grid's ability to supply clean, reliable power, free from distortions. The voltage and current waveforms for all appliances should remain (nearly) sinusoidal. To characterise the 'quality' of the power supplied, we look into the symmetry, frequency and amplitude of the waveforms.

This definition provides a way to characterise power quality 'disturbances'.

CLEAN AND RELIABLE POWER SUPPLY IS IMPORTANT, AND SO WE NEED TO CATEGORISE POWER QUALITY DISTURBANCES. Source: Unsplash



Voltage amplitude variation: On average, the voltage amplitude equals its specified value, but it is never exactly equal at any given moment. The variation in voltage during a day can be described using a distribution function; the more tightly the distribution is peaked about the mean, the better the quality of the voltage supplied. Voltage magnitude variations can be caused by a variation of load in a section of the grid.

Current amplitude variation: As stated earlier, current magnitude variations in a section of the grid can lead to voltage amplitude variations. As is evident from the name, the current amplitude values change throughout the day. The design of the power distribution system depends on the variation in the current amplitude within the system.

Voltage frequency variation: A difference between the generated power and the load causes frequency imbalances. Frequency transients introduced by short circuits in a system are also included in this category. The frequency is never constant, but varies around its mean value.

Voltage and current imbalance: Three-phase systems need the voltages supplied in the three phases, as well as the phase differences between them, to be equal. Any deviation from the above criteria falls under this category. It is caused mainly by load imbalances, which refer to an asymmetric distribution of load among the three phases. This is a serious concern for three-phase loads, as it leads to excessive heating of wires in induction and synchronous machines, reducing their efficiency.

Voltage fluctuation: Voltage fluctuation, or voltage flicker, is defined as high-frequency voltage variation (which in the present context means any variation over the span of a few seconds). This leads to a reduction in device performance but isn't directly noticeable, unless we are talking about tube lights and bulbs. For instance, if the frequency of the fluctuation is between 1 Hz and 10 Hz, it is directly noticeable by the human eye and can be quite irritating. The sensitivity of human eyes to such fluctuations explains the interest in this phenomenon, as well as its name, 'voltage flicker'.

Harmonic current and voltage distortions: The presence of non-linear loads in the system produces harmonics, which are multiples of the fundamental frequency. These are a serious concern because they lead to excessive heating, and thus losses, in appliances.

Interharmonics: Several appliances like arc furnaces, and physical phenomena like changes in Earth’s magnetic field following a solar flare, introduce frequency components that aren’t multiples of the fundamental frequency. They can damage the appliances operating on the grid, and thus present a serious concern while dealing with power quality disturbances.

Voltage notching: The use of appliances such as rectifiers can cause a drop in supply voltage during the commutation interval, that is, the period when the current shifts from one thyristor in the rectifier to the next. In ideal circuits, the shift of current from one thyristor to the next is instantaneous, but inductance in the supply doesn't allow an instantaneous change in the current flowing through a thyristor. This leads to voltage notching, a periodic phenomenon that introduces high-frequency components into the voltage waveform.

Now, several of these disturbances, for instance voltage notching, can occur for short durations of time, while others, like frequency variation, voltage fluctuation and interharmonics, persist for longer intervals. Clearly, simple frequency-domain analysis is insufficient for characterising and classifying the disturbances, since they can span varying periods.

Looking at time and frequency simultaneously…

We are all familiar with the Fourier transform. It provides an elegant way to decompose a signal into harmonics and to find the relative strength of the different harmonics in a signal. While it helps in analysing how the signal appears in the frequency domain, it fails to identify the temporal location of a disturbance 'event'. It also isn't useful for short bursts of signal which don't span a large duration. Since both kinds of phenomena occur among the power quality disturbances we encounter, a transform that enables us to look at time and frequency simultaneously is required. An alternative to the Fourier transform is the short-time Fourier transform (STFT). Here, we fix a window length and take the Fourier transform of the signal in that window. Thereafter, we shift the window to different locations to obtain the spectra at different times. We would obtain a three-dimensional graph as a consequence, telling us about the temporal as well as spectral information contained in the signal.

The short-time Fourier transform has one limitation: its window size. The width of the window within which you take the Fourier transform of the signal is fixed. Thus, with narrow windows, you can't capture low frequencies (since a low-frequency waveform takes a longer time span to complete one period), and thus you can't resolve small frequency differences; the frequency resolution with narrow windows is low. At the same time, large windows make it impossible to detect 'events' that don't last more than a few milliseconds, reducing the time resolution.
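A hedged sketch of the STFT idea (illustrative only; the sampling rate, the window length and the synthetic 'disturbance' are our assumptions, and scipy is assumed available) makes the fixed-window trade-off just described concrete:

```python
import numpy as np
from scipy import signal

fs = 3200                                  # assumed sampling rate, in Hz
t = np.arange(0, 1.0, 1 / fs)              # one second of data
v = np.sin(2 * np.pi * 50 * t)             # clean 50 Hz fundamental

# Inject a short high-frequency 'event' between 0.40 s and 0.45 s.
burst = (t > 0.40) & (t < 0.45)
v[burst] += 0.5 * np.sin(2 * np.pi * 600 * t[burst])

# STFT with a fixed window length: nperseg controls the time/frequency trade-off.
f, tau, Z = signal.stft(v, fs=fs, nperseg=256)
print(Z.shape)   # (frequencies, time segments): the 'three-dimensional graph' in the text
```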

An alternative to the STFT is presented by wavelet transforms. Instead of using a fixed window, we use wavelets at different scales.


A wavelet is an oscillation that persists for a short period of time: it begins at zero amplitude, increases in amplitude to a certain limit, and then falls off. A wavelet can be scaled as well as shifted in time, and this idea forms the basis of wavelet transforms. A wavelet with a larger scale (one having a larger frequency of oscillations) is able to capture a higher frequency resolution. A wavelet with a smaller scale, on the other hand, is able to capture a higher time resolution. Thus, the shortcomings of the STFT are done away with.

THE ORANGE CURVE SHOWS THE REGION FOR WHICH WE OBTAIN THE FOURIER TRANSFORM IN THE STFT. THIS HELPS US OBTAIN THE TEMPORAL AS WELL AS FREQUENCY INFORMATION CONTAINED IN THE SIGNAL. Source: Graphs

There are discrete versions of the wavelet transform, implemented by passing the signal through low-pass and high-pass filters to separate out its different frequency constituents. This is followed by downsampling, i.e., reducing the sampling rate of the signal to half its previous value. Wavelet transforms are ubiquitous in signal processing: they are used for high-fidelity signal compression, signal denoising and detection of discontinuities in a signal. A modified version of the discrete wavelet transform, called the Tunable Q wavelet transform (TQWT), was suggested by Ivan W. Selesnick. It can be used to extract different features from a voltage signal, which can later be used to classify the power quality disturbance in the signal. The quality factor of the filters can be adjusted in this method, depending upon the oscillatory behaviour of the signal being fed in.
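The filter-and-downsample procedure described above is exactly what a discrete wavelet decomposition does. The sketch below uses the PyWavelets library (assumed to be installed; it implements the ordinary DWT, not the tunable Q wavelet transform from the papers), with a synthetic voltage sag as the test signal:

```python
import numpy as np
import pywt   # PyWavelets, assumed available (pip install PyWavelets)

fs = 3200
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)
v[t > 0.1] *= 0.6                          # a crude synthetic voltage 'sag' half-way through

# Four-level decomposition: each level low/high-pass filters the signal and
# downsamples by two, splitting it into progressively narrower frequency bands.
coeffs = pywt.wavedec(v, wavelet='db4', level=4)
for i, c in enumerate(coeffs):
    print(f"band {i}: {len(c)} coefficients")
```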

CONSIDER THIS GRAPH, FOR INSTANCE. WE CAN SEE THAT IT CAPTURES THE DISTURBANCE AT T=0 WITH DECENT ACCURACY, SINCE THE WINDOW (ORANGE) IS OF A SMALLER SIZE. BUT SINCE THE SIGNAL (BLUE) ISN'T ABLE TO COMPLETE ONE PERIOD IN THE WINDOW, ITS FREQUENCY ISN'T REGISTERED CORRECTLY. Source: Graphs

NOW, CONSIDER THIS CASE WHERE THE WINDOW SIZE (ORANGE) IS LARGE ENOUGH FOR THE SIGNAL TO COMPLETE ONE PERIOD IN THE BOX. THUS, ITS FREQUENCY IS REGISTERED CORRECTLY. BUT, AT THE SAME TIME, WE CAN'T PRECISELY LOCATE THE TIME INSTANT OF THE DISTURBANCE. Source: Graphs

THE WAVELET WITH A HIGHER FREQUENCY OF OSCILLATION CAN CAPTURE A HIGHER FREQUENCY RESOLUTION Source: Graphs

A WAVELET Source: Graphs


You know some, you learn some…

Given the problem of making a system that classifies something into a given class, there are two different ways you could go about it. 1. Hardcode the rules in the form of domain knowledge and feed them to the system; the system then uses the rules to perform classification. 2. Collect a set of examples which have already been classified and use that knowledge to 'teach' the system how to perform classification; in this case, the system learns the rules on its own. Both of these approaches have upsides and downsides. For the former, there is an implicit assumption that the knowledge about the process is complete and correct. But for real-world problems, achieving either completeness or correctness, let alone both, is difficult. Even when the rules are specified to reach a desired level of accuracy in the task at hand, they can become really complex and branched. Thousands of interrelated, or worse, recursive rules can result, and keeping track of them can become a herculean task. Also, for any change in input data, we might need to modify the system, which for a system with several complex linkages might turn out to be an exercise in masochism.

We can turn to empirical learning approaches to avoid the issues with the first approach. What do we observe then? We need huge amounts of data. The system can learn the rules on its own, but it needs a lot of examples to attain a desired level of accuracy. Also, making the data more and more general is a difficult task, without which certain biases might creep into the system. Artificial neural networks are one among the many paradigms under the empirical learning approach; in addition to the problems just mentioned, they require long training periods and a problem-specific topology. Hybrid learning systems bring the best of both approaches together, using the fact that the shortcomings of one approach are complemented by the other. For instance, we can begin with simpler rules and then allow empirical learning to take over and learn more complex rules, building over the ones given beforehand. At the same time, the training times are lowered, since we don't have to begin from scratch. This is the motivation behind using knowledge-based artificial neural networks (KBNN) for the task of classifying power quality disturbances.

A KNOWLEDGE-BASED ARTIFICIAL NEURAL NETWORK (KBANN) CAN BE BUILT FROM A CONNECTIONIST NETWORK THAT SPECIFIES THE TOPOLOGY OF THE NETWORK. WE TRAIN IT OVER THE EXAMPLES TO OBTAIN THE FINAL HYBRID NETWORK. AN EXAMPLE OF AN ARTIFICIAL NEURAL NETWORK: HERE THE NUMBER OF LAYERS AND THE NODES IN EACH LAYER ARE HYPERPARAMETERS THAT NEED TO BE TUNED TO PROVIDE GOOD LEARNING ACCURACY.


What have other researchers done?

Diagnosis and characterisation of signals requires two steps: first, preliminary analysis and feature extraction from the signal, which is then fed into the classifier model; second, training the classifier model, and validation after the training is complete. For the former, methods like the Fast Fourier transform (FFT), the discrete wavelet transform (DWT), the wavelet packet transform (WPT), empirical mode decomposition (EMD), Kalman filters and the Stockwell transform (ST) have been applied, and the results have been published. The FFT, as mentioned earlier, can't give temporal information about the signal, which might be crucial for locating transient power quality disturbances. WPT and DWT are good options, but they suffer from the problem of a pre-defined (non-adjustable) filter design, so the extracted components cover a wide band of frequencies; they can't distinguish between some disturbance categories if the signal has a low SNR. EMD and ST both suffer from the problem of mode-mixing, and thus aren't great options either.

The Tunable Q wavelet transform described in the previous section can be used to extract the fundamental frequency component, since its filter design can be adjusted to separate the fundamental component and the higher-order harmonics from the distorted signal. For the classification problem, several approaches have been suggested and tried. These include decision trees, rule-based expert systems, fuzzy logic, K nearest neighbours, support vector machines and artificial neural networks. Decision trees and rule-based expert systems assign equal weightage to all the features in classification, which can increase the possibility of an error in classification. An artificial neural network is an attractive option because of the flexibility it offers in terms of the weights assigned to different features, but as stated in a previous section, training it and deciding the structure of the network is a laborious exercise, and requires a huge dataset.


A novel approach…

In their paper in 2018, Dr Trapti Jain and Dr Amod Umarikar, associate professors at IIT Indore, Karthik Thirumala, a research scholar at IIT Indore, and M Siva Kumar, an undergraduate student at IIT Indore, applied the TQWT (Tunable Q Wavelet Transform) for preliminary analysis and feature extraction from the distorted signal. They used a dual multi-class SVM classifier for classifying the disturbance into one of 14 classes, as under: C1 - voltage fluctuation, C2 - voltage fluctuation with harmonics, C3 - voltage fluctuation with transient, C4 - voltage interruption, C5 - sag, C6 - swell, C7 - harmonics, C8 - oscillatory transients, C9 - notch, C10 - spike, C11 - sag with harmonics, C12 - swell with harmonics, C13 - sag with transient and C14 - swell with transient. Previous approaches with SVM classifiers had required more than five extracted features to classify fewer than ten disturbance categories. By using a dual multi-class SVM (MSVM) classifier, they were able to obtain decent accuracies with just five features and classify the disturbance into one of the above 14 classes. The MSVM classifier uses fewer binary SVMs, leading to a faster classification, and the issues with other approaches, as explained before, are averted. They obtained an average accuracy of 98.78% with noiseless signals and 96.42% with noisy signals.

In a later paper, in 2019, Dr Amod Umarikar and Dr Trapti Jain, along with Karthik Thirumala and Harshal Jamode from NIT Tiruchirappalli, used a wavelet packet transform to extract the fundamental component from the distorted signal. Using seven features, they used a knowledge-based neural network (KBNN), a hybrid learning system, for classifying the power quality disturbances into six classes. The KBNN used a rule-based connectionist approach, which decided the number of layers of the neural network and the number of nodes in the input and output layers. The rules were defined based on the available literature. After that, they trained the neural network with a backpropagation algorithm to set the weights and biases of the hidden layers. Using the final neural network as a classifier, they were able to obtain 99.33% accuracy for classification into six classes, as under:

C1 - normal voltage, C2 - voltage interruption, C3 - voltage sag, C4 - voltage swell, C5 - harmonics and C6 - oscillatory transients.

Their next paper, in 2019, brought together the application of TQWT for preliminary analysis of the signal and feature extraction, and a knowledge-based neural network (KBNN) for the classification. Seven extracted features were used for classification of the distorted signal into nine categories. The procedure for building the preliminary connectionist network and training it on a dataset was the same as explained in the previous paragraph. They were able to obtain a classification accuracy of 98.33% for classification into nine classes, as under: C1 - normal voltage, C2 - voltage interruption, C3 - voltage sag, C4 - voltage swell, C5 - harmonics, C6 - oscillatory transients, C7 - voltage fluctuation, C8 - notch and C9 - spike.
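Purely as an illustration of the classification stage (scikit-learn's generic SVC rather than the authors' dual MSVM or KBNN implementations, and random placeholder numbers standing in for the five TQWT-derived features), a multi-class SVM can be trained on a feature table like this:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 1400 signals, 5 extracted features each, 14 disturbance classes.
# In the real pipeline the features come from the TQWT analysis of each voltage signal.
X = rng.normal(size=(1400, 5))
y = rng.integers(0, 14, size=1400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel='rbf', decision_function_shape='ovo')   # multi-class built from binary SVMs
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```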

Conclusion

A perfunctory glance might make the above seem like a purely academic exercise. But the classification of power quality disturbances serves a practical purpose. First of all, it helps in the quantification of power quality. Next, to develop appropriate mitigating devices that reduce the effect of these distortions, one needs an understanding of their types and sources. Therein lies the importance of classifying the disturbances into various categories and finding ways to do the categorisation more accurately. After all, countering one's nemesis requires one to know their modus operandi, and that's what researchers working in this arena have been trying to do.

RESEARCH

Computational Materials Science

This article is based on the discussion I had with Dr. Satyesh K. Yadav, Assistant Professor at the Department of Metallurgical and Materials Engineering, Indian Institute of Technology Madras, Tamil Nadu.

Edited by SUFYAN M. SHAIKH Designed by AVANTI HARGUDE

Background:

From the Nokia 1100 to the iPhone 12 Pro Max, from the Tata Nano to the Bugatti La Voiture Noire, and from the drinking glass at home to the window glass of a space shuttle, we have come a long way in developing materials for a variety of applications. Whether it is stopping a Ferrari from 300 km/h to a standstill or landing an Airbus A380 safely, materials development demands innovation across industries. The ever-increasing demand for efficient power generation and environment-friendly air travel has kept materials scientists on their toes for decades, if not centuries. Computational Materials Science is one such field that encompasses the entire materials development cycle.


Rapid process development, quick microstructural analysis, faster property evaluation, improved performance; all of it at a reduced cost. That is the ultimate aim of Computational Materials Science (CMS)/Integrated Computational Materials Engineering (ICME). The revolution in computers has led to faster materials discovery and quicker alloy development cycles. Quicker and cheaper computing resources have made it possible to run entire experiments on desktop computers instead of running them on expensive equipment.

Current State

From simulating materials' electronic properties using ab-initio Density Functional Theory techniques to testing the entire component in a Universal Testing Machine, CMS/ICME covers materials development at all length scales.

Material development domains. All of the above need to happen at a faster pace to gain a competitive advantage.

Multi-scale modeling is the primary area of CMS/ICME, where materials are modeled, simulated and tested right from their electrons up to bulk components spanning meters. CMS/ICME techniques at various length scales are:

1. Density Functional Theory (DFT) - Electrons, Atoms - Picometers

An atomistic modeling technique based on the fundamental Schrödinger equation of quantum mechanics. The ultimate technique to model and simulate atoms and electrons, free from any assumptions. Requires expensive supercomputers to run the simulations. Can't handle more than a few hundred atoms. Works at atomic length scales.

2. Molecular Dynamics (MD) - Atoms, Molecules - Angstroms, Nanometers

Another atomistic modeling technique, based on Newton's laws of motion (a toy sketch of this idea appears after this list). Requires inputs from DFT. Can run smoothly on desktop computers. Can handle a few million atoms. Works at molecular length scales.

3. Calculation of Phase Diagrams (CALPHAD) - Materials stability - Microns

A bulk materials modeling technique based on the laws of thermodynamics and kinetics. Calculates the relative stability of various materials' phases at equilibrium. Can run smoothly on personal computers. Works at the micron/microstructure level.


4. Phase Field - Grains, Interfaces - Microns

Similar to CALPHAD, but also takes into account the local interactions at the different interfaces. Requires workstations. Works at the micron/microstructure level.

5. Finite Element Methods - Component Level - Meters

A bulk materials modeling technique based on the mechanical and thermal properties of materials. Requires workstations. Works at the component level (in millimeters) and also at the bulk structure level (in meters).
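As a toy illustration of what "based on Newton's laws of motion" means in practice for item 2 above (reduced Lennard-Jones units, unit masses, a single velocity-Verlet step; this is a sketch, not a production MD code such as LAMMPS):

```python
import numpy as np

def lj_force(r_vec, eps=1.0, sigma=1.0):
    """Lennard-Jones force on atom 1 due to atom 2 (reduced units)."""
    r = np.linalg.norm(r_vec)
    return 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) * r_vec / r ** 2

dt = 0.001                                        # time step (reduced units)
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros((2, 3))                            # start at rest, masses set to 1

f = lj_force(pos[0] - pos[1])
forces = np.array([f, -f])                        # Newton's third law

# One velocity-Verlet integration step.
pos += vel * dt + 0.5 * forces * dt ** 2
f_new = lj_force(pos[0] - pos[1])
forces_new = np.array([f_new, -f_new])
vel += 0.5 * (forces + forces_new) * dt

print(np.linalg.norm(pos[1] - pos[0]))            # at this separation the pair drifts slightly closer
```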

Overview of Multi-scale modelling methods covering various length scales. (Final component photo by Emiel Molenaar on Unsplash)

Future Outlook

With the revolution in data science, the field of Computational Materials Science has also been positively affected. Faster computers and cheaper computational resources, combined with developments in data science and Artificial Intelligence/Machine Learning approaches, are further reducing the time to market. We now understand our components and materials much better than we did a decade ago. Right now, it is almost impossible to model even a mole of atoms with ab-initio Density Functional Theory; a mole is roughly a 6 followed by 23 zeros (6.022 × 10²³ atoms), and a kilogram of iron contains more than 17 moles of iron atoms. That means right now we cannot even model a gram of iron using the fastest supercomputer available in the world! Another example is Rolls-Royce, which generates three petabytes of data every year from its jet-engine fan blade manufacturing alone [1] - equivalent to a stack of CDs three miles high! A thorough analysis of this much data can lead to major improvements in fuel efficiency and cost savings.

In the future (which is almost here), millions of alloy compositions will be studied using complex AI/ML models. These models filter the alloys based on experimentally reported data and then feed them to physics- and thermodynamics-based simulations, which further filter down the alloy compositions to be synthesised and tested. We will see more and more physics being taken into account at higher and higher length scales in these computational techniques. Innovations in AI/ML techniques will lead to decreased human interference in the alloy development cycle, which acts as a bottleneck right now.


Computational Materials Science for new alloy development. Feedback at every stage leads to a quicker alloy development cycle.

The figure above explains alloy development through the CMS/ICME approach. We expect the entire computational materials science field to become automated in the future, to the point that once the design parameters are set, the machines can do everything from alloy searching to synthesis to testing themselves. Quicker feedback between the simulation, synthesis and testing stages, across length scales, will lead to faster development of new materials. As rightly put by Dr. Satyesh K. Yadav of the Indian Institute of Technology Madras:

“We don't even have the storage capacity, forget about processing a mole of atoms, right now. By any standard, Computational Materials Science (CMS) and Integrated Computational Materials Engineering (ICME) will play a crucial role in the future of materials development. As the capabilities of each of the materials modelling techniques increase over larger length and time scales, they will start overlapping more and more. The gap between atomistic modeling techniques like Density Functional Theory and CALPHAD techniques will start fading. The fidelity of ICME will improve, and we will be able to accommodate more and more physics at larger length scales.”

— Dr. Satyesh K. Yadav, Assistant Professor, Department of Metallurgical and Materials Engineering, Indian Institute of Technology Madras, Tamil Nadu, India.


RESEARCH

Transforming Agriculture: Right from the Roots

A novel approach for improving the agricultural industry by tapping into plant-microbiome interactions explored in Dr. Shilpi Sharma’s lab at IIT Delhi.

Written by DHWANI TECKCHANDANI Designed by SHARVARI SRIRAM

Living in the era of a pandemic, it is easy to associate microorganisms with villainous creatures that threaten civilisation. This prompts us to question what Marvel has fed us all these years: does good indeed hold over evil? Keeping age-old philosophical questions aside, the harmful impact of these organisms has easily overshadowed how dependent the entire ecosystem is on them. While we study the vital organs of our body, like the liver, stomach or kidneys, one can argue that your microbiota is an entire underplayed organ in itself, one you can't survive without. Our bodies are home to a plethora of microorganisms like bacteria, fungi, archaea and even viruses; this community is known as the microbiota. Like a fingerprint, your microbiota is a signature of your body, only it also provides the opportunity to share and exchange. While a compromised microbiome may be reflected in one's poor health, a healthy human being's microbiome is one to learn from, since it holds the potential to greatly improve each of our lives. It is only natural to assume that microbiomes are just as essential to other life forms as they are to humans. Thus, one wonders about plant-microbe interactions, and what we see is overwhelming. Plants are dependent on their microbiome for growth, health, nutrient uptake, stress resilience and resistance to pathogens. This was, until recently, untapped territory. The potential of harnessing the plant microbiome could revolutionise agriculture as we see it now.


Why should we stress plant-microbe interactions in the first place?

It is estimated that we will have to double our food production as the world's population grows to approximately 9.1 billion by 2050. This task is so challenging because we have maxed out on using our external resources efficiently. We have successfully understood how to maximise the number of plants we can grow in a given amount of land. We have continually relied on fertilisers and other chemicals for nutrients essential to crops, so much so that there isn't much scope for increasing their concentration in the soil anymore, as they have already been overused. Over the last few years, the agricultural industry has witnessed reducing yield gains. With poor agricultural practices, we have already destroyed the microbiota significantly. In light of the economic instability of the world, climate change and maxed-out external inputs, we need a newer approach to improve agricultural yields. This is where cultivating the plant microbiome comes into the picture. It not only provides the possibility of increased yields but also a decreased requirement for chemicals like pesticides, insecticides and fertilisers.

This is what Dr. Shilpi Sharma and her lab at IIT Delhi work on. Dr. Sharma identifies herself as an environmental microbiologist who sees immense potential in microorganisms, not only to change the face of agriculture but as model organisms from which we have a lot to learn. Her lab works on sustainably using this biotic component of the soil as efficiently as possible. Their work is mainly based on the organisms living in the rhizosphere, the narrow region of the soil referred to as the microbial storehouse. It is an extremely rich region with respect to its microbiota, because plant roots make strong associations with the soil microorganisms, providing them with an ecosystem to thrive in.

To put into perspective how drastic a change an improved microbiota can have, Figure 1 shows a plant grown with mycorrhizae (right) and one without (left). Mycorrhizae are fungi that grow on plant roots in a mutual symbiotic relationship with them. They help in nutrient uptake and in improving the soil biology, and can comprise up to 80% of the root system! They help the plant tap into depths and crevices the roots were incapable of reaching themselves. The hyphal network that ramifies into the soil can run to more than 100 meters in a single cc of soil! It is evident from Figure 1 how drastic the improvement in the root network is. Among various other advantages, one major selling point is that mycorrhizae are not very species-specific and can be used to enhance the growth of most crops.

FIG 1: PLANT GROWN WITH MYCORRHIZAE (RIGHT) AND ONE WITHOUT (LEFT)


Rhizobacteria, as the name suggests, are root-associated bacteria living in the rhizosphere. They promote plant growth and enhance the recycling of plant nutrients. There is extreme variation among rhizobacteria, and their effects are quite species-specific. Thus, a lot of thought and research needs to go into understanding exactly what kind of microbial composition of the soil is needed to enhance yield, which could vary from species to species, consequently requiring a lot more work.

Bacteriophages, another class of microorganisms, have proved to be essential for the prevention of infection by bacteria. These are viruses that infect bacteria and not the host organism, protecting it from pathogenic harm. What is remarkable is that they have very few off-target effects. Although they are highly promising, they are still challenging microbes to work with in agriculture. However, in the light of increasing antimicrobial resistance, we need to look for alternatives, and this provides the potential to do that. Thus, the lab is also involved in the surveillance of antimicrobial-resistant bacteria and genes in agro-systems, since that is a pressing issue which often leads to great loss of crop yield. Disease-suppressive soils, another interesting area of research, are soils that provide protection to crops from soil-borne pathogens due to the efficient microbiome that they are composed of. A combination of such techniques has the potential to replace the antimicrobials used in agriculture, which further harm the soil by destroying its microbiome.


The lab also works on the assessment of soil health under different agricultural practices, as that is a major contributing factor in building the soil microbiome. We need to focus not only on what the better microbiome would be but also on how to efficiently deliver it so that it thrives in the soil. Especially with climate change, we need to promote plant growth under stressed agro-ecosystems, and the biotic component of the soil has a major effect on crops being able to handle environmental stresses. For years we have bred plants for better plants; what we now need to learn is how to breed microbes for better microbes. Fortunately, this task is easier than it seems, since plants can identify which strains of a microorganism are more helpful for their own needs and in turn favour that relationship over the others. Thus, the symbiotic relationships that are more fruitful to the plant thrive, giving us information about which microorganisms to breed and how. Unlike plant breeding, we do not have enough experience in microbe breeding, which is why this ability of plants is the ideal way to learn which strains of microorganisms to look for. With the advanced sequencing and computational techniques used in the lab, there is a lot of scope for improving our understanding of the soil microbiome and sustainably enhancing agricultural yield.


With the research going on in Dr. Shilpi Sharma's lab, the inability to provide any more external input for improving agricultural yield doesn't seem like a dead end, but rather an opportunity and a much-needed wake-up call to harness what was already present in nature: its diversity of life forms. Who knew the answer lay with the much overlooked and misunderstood category of organisms living right under our noses (well, roots, to be precise).

RESEARCH

Source: https://space-guy-status.blogspot.com

An Enigmatic Black Hole Collision: GW190521

“A year ago, three interferometers on Earth detected an atypical gravitational wave. After studying this signal for more than a year, in September 2020 scientists presented a paper stating the mysterious origin of this emission.”

Written by PIYUSH KUMAR Designed by SHALMALI SRIRAM

About seventeen billion light-years away, two very large black holes spiralled into each other, collided and merged. On May 21, 2019, at 8:32 am IST, the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the United States and the Virgo interferometer in Italy detected a gravitational-wave signal from the merger of an extraordinarily massive pair of black holes orbiting in a binary system. The event has been named GW190521. It is a record-breaking gravitational-wave observation that revolutionises our knowledge of how black holes are formed and provides a new way to study the theory of gravity.

After more than a year studying this atypical new signal, scientists think they know what caused the heftiest black-hole merger seen till date.

What was the GW190521 event?

Two very massive and strange black holes, of 85 and 66 times the mass of the Sun, spiralled into each other and emitted the gravitational-wave signal. The merger led to the formation of another monstrous black hole, of 142 times the mass of the Sun. The signal was picked up by the Advanced LIGO and Advanced Virgo interferometers, and the event was dubbed GW190521. The signal travelled a distance of 17.2 billion light-years through the expanding Universe, making it the most distant gravitational-wave signal observed so far by the gravitational-wave detectors. In a fraction of a second, the merging black holes released energy equivalent to roughly eight times the mass of our Sun, all in the form of gravitational waves. The duration of the signal was shorter, and its frequency lower, than any other black hole merger observed so far. According to gravitational-wave theory, the time interval that the signal from a binary black hole merger spends in the sensitivity band of gravitational-wave detectors is inversely proportional to the total mass of the binary system. In the case of GW190521, this time interval was only about 0.1 seconds and the peak frequency was about 60 Hz. From the first day, it was very clear that the detectors at LIGO and Virgo had detected something bigger than usual.
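As a rough back-of-the-envelope check of the energy scale quoted above (simple arithmetic with the publicly quoted numbers; the actual analysis works with full posterior distributions, so the point-estimate masses do not subtract exactly to eight solar masses):

```python
# Mass-energy equivalent of roughly eight solar masses, the scale quoted for GW190521.
M_SUN = 1.989e30        # kg
C = 2.998e8             # m/s

m1, m2, m_final = 85.0, 66.0, 142.0             # quoted point estimates, in solar masses
radiated = m1 + m2 - m_final                    # ~9 from these medians; the article quotes roughly eight
print(radiated, f"{8.0 * M_SUN * C**2:.1e} J")  # eight solar masses is about 1.4e48 joules
```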

Contributions from Indian Researchers

Teams of researchers at IIT Bombay and IIT Gandhinagar involved in gravitational-wave research groups collaborated with LIGO-Virgo scientists and have played a key role since the first observation of this massive gravitational wave. The group was involved in the detection studies of the GW190521 event. As the time interval for which the sensitivity bands at LIGO and Virgo received the signal was very small, it was essential to confirm its astrophysical nature. The group of Indian scientists was involved in such a study, assessing the detection significance of GW190521 in the phase-matching signal detection search along with their LIGO-Virgo colleagues. In addition, the group contributed to assessing the distance reach of various searches in the intermediate-mass black hole parameter space. The research group at IIT Gandhinagar was involved in developing the filter bank used for the detection of the black holes in the third observing run, along with LIGO-Virgo scientists. “This particular black hole is important because this is the first time that we got very clear evidence of the existence of intermediate-mass black holes,” said Archana Pai, professor, physics department, IIT Bombay.

Frequency v/s time visual of the emitted signal detected by the three detectors. Source: LIGO Scientific Collaboration

Over the last three decades, Indian scientists have established a rich legacy of exploration into space and the study of black holes. In particular, Indian researchers have contributed to the fundamental algorithms crucial to searching for inspiralling binaries in noisy data from multiple detectors,



Know about GW190521 Source: www.ligo-india.in


in computing the theoretical waveforms of gravitational wave signals by solving Einstein’s equations, in separating astrophysical signals from numerous instrumental and environmental artefacts, in the interpretation of joint gravitational-wave and gamma-ray observations, tests of Einstein’s theory and many other aspects of the data analysis.

The mysterious 85-solar-mass black hole

Astrophysical models suggest that black holes with masses between about 65 and 120 times the mass of the Sun cannot be formed by a collapsing star. This mass range is termed the “mass gap”. This is why astronomers were not expecting a black hole of 85 solar masses as one of the pair that created GW190521. It falls in the forbidden mass gap, which points to an alternative way of forming massive black holes in the Universe – possibly by successive collisions between pairs of smaller black holes in a special environment, such as a globular cluster with many closely spaced black holes. For a star massive enough to leave behind such a black hole, current theory predicts that compression and heating in its core lead to a runaway explosion called a pair-instability supernova. Such an occurrence destroys the star so completely that no remnant is left to collapse into a black hole.

Mass comparison of GW190521 with other LIGO-Virgo black hole mergers. Source: www.ligo-india.in


Strange GW190521 and how it shapes further research

Black holes with masses below about 100 times the mass of the Sun are termed “stellar-mass black holes”, while “supermassive black holes” are those with masses above roughly 100,000 times the mass of the Sun. Between these two lies the enigmatic category of “intermediate-mass black holes”, with masses in the range of 100–100,000 times that of the Sun. GW190521 produced the first intermediate-mass black hole observed by the LIGO-Virgo detectors. “As per our conventional knowledge of black holes, they either belong to the stellar-mass category formed from the collapse of a single heavy star or supermassive black holes like those found at the centre of galaxies. This new particular discovery seems to suggest a third kind, the intermediate-mass black hole, which is the first-of-its-kind ever detected in LIGO,” said Anand Sengupta, associate professor of physics at IIT Gandhinagar. Built on Albert Einstein’s 1916 prediction of the existence of gravitational waves, the LIGO-Virgo Scientific Collaboration, comprising 15 countries including India, published its first paper on the detection of gravitational waves in February 2016, which revolutionised the study of black holes. Now, in September 2020, scientists have a new mystery that originated from the collision of two heavy black holes. Some researchers believe it offers a unique opportunity to test Einstein’s General Theory of Relativity: the discovery allows a deeper look into the merger and ringdown part of the signal, and tests were performed to search for extra features of the signal predicted by alternative theories of gravity. This extraordinarily heavy pair of stellar-mass black holes challenges our understanding of how black holes form.
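The nomenclature above can be summed up in a few lines of code. This is only a toy illustration, with the mass boundaries taken from the ranges quoted in this article:

```python
def classify_black_hole(mass_in_solar_masses: float) -> str:
    """Label a black hole by mass, using the ranges quoted in the article."""
    if mass_in_solar_masses < 100:
        return "stellar-mass"
    if mass_in_solar_masses <= 100_000:
        return "intermediate-mass"
    return "supermassive"

# The remnant of GW190521, at roughly 142 solar masses, lands squarely in the
# intermediate range, which is exactly what makes the event so interesting.
print(classify_black_hole(142))   # -> intermediate-mass
```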




Source: Caltech/R. Hurt (IPAC)








Staying Rooted: Where Agriculture meets Tech

In the 21st century, the popular concepts of data science, machine learning and information technology have been introduced in numerous applications. Here is one that can revolutionize the agricultural industry, allowing farmers to rake in better yields using fewer resources.

Written by ABIR MEHTA and BHAVINI JELOKA
Designed by SHALMALI SRIRAM

With around 60-70% of our citizens dependent on agriculture for their livelihood, India is still an agrarian economy, and innovation in the agriculture sector can go a long way in boosting our output as a country. The Agro Informatics Lab, IIT Bombay, led by Professor Adinarayana, works on one primary goal: finding innovative technological solutions that can be implemented in farms across India. Agro Informatics combines the modern concepts of data science, information technology and machine learning with agriculture to make farming more efficient and profitable. It is difficult for farmers to gauge the health of their crops because they lack objective measures for deciding whether a crop is healthy. Crop health is a crucial factor in determining the yield of a crop, so this poses a challenge to the agriculture industry: farmers lack the information to decide which parts of their fields require more resources and which require less. The Agro Informatics Lab has come up with a solution to this problem. Using image processing, spectroscopy and machine learning, they have developed a method to gauge the health of a plant objectively.





Their solution provides farmers with valuable information, such as which areas need to be irrigated more thoroughly and which need more fertilization. So, let’s have a look at how they do it.

Indo-Japan collaborative critical-zone observatory equipped with ground sensors, an automatic weather station, an ET flux tower and drone-based hyperspectral imaging. Source: https://agroinformaticslab.github.io


Here’s the team

IIT Bombay is working in collaboration with four other universities: IIT Hyderabad, IIIT Hyderabad, PJTSAU, and the University of Tokyo. Students and professors from these universities are focusing their attention on a research farm in Hyderabad, referred to as a “critical zone”, where they are testing innovative methods that can ensure better yields for farmers. The project is funded jointly by DST (Department of Science and Technology, India) and JST (Japan Science and Technology Agency). To get more information about the project, we got in touch with Rahul, a PhD student currently working in the Agro Informatics Lab. Now that we know the team, we can dive a bit deeper into their research.



First, what do we mean by the health of a crop?

There are two broad sets of parameters used to determine this: the biophysical and the biochemical properties. The biophysical properties loosely translate to the parameters that can be seen by the naked eye and measured directly, such as the height, biomass and number of flowers. On the other hand, biochemical properties characterise the chemical reactions that occur in the plant. The percentage of water in the leaves, the nitrogen content and the carbon content are some of the factors that influence the reactions within the plant and hence fall under the biochemical properties.

Naturally, if any of these parameters display anomalies from their desired values, we can categorise the plant as unhealthy. This is very similar to our everyday understanding of the word ‘health’. If a human being’s body parameters (temperature, BMI, etc.) deviate from their ‘normal’ values, we say that the person is unhealthy. While this ‘unhealthiness’ can often be attributed to poor lifestyle or diet in humans, in the context of plants, water stress, fertilizer stress or pests are the general causes.
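As a purely illustrative sketch of this “deviation from normal” idea (the parameter names, ranges and sample values below are made up for the example, not the lab’s actual figures):

```python
# Toy crop-health check: flag a plant as stressed if any measured property
# falls outside an assumed "normal" range. All numbers are illustrative only.
NORMAL_RANGES = {
    "leaf_water_content_pct": (60.0, 85.0),   # biochemical
    "nitrogen_content_pct":   (2.5, 4.5),     # biochemical
    "plant_height_cm":        (40.0, 120.0),  # biophysical
}

def health_flags(measurements: dict) -> list:
    """Return the names of parameters that deviate from their normal range."""
    flags = []
    for name, value in measurements.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            flags.append(name)
    return flags

sample = {"leaf_water_content_pct": 52.0, "nitrogen_content_pct": 3.1, "plant_height_cm": 95.0}
print(health_flags(sample))   # -> ['leaf_water_content_pct']  (possible water stress)
```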






What is the solution?

Although the ultimate goal is to increase yield and produce healthy crops, farmers often find themselves without the technology to make such large improvements. Moreover, several of them are sceptical about investing large sums of money in it, since they are not fully aware of its benefits. In a data-driven world, the team has come up with a clever technique that harnesses drone-based imaging to monitor crop health. The technique relies heavily on data obtained from images of the farm scanned by drones. They use the DJI Matrice 600, an automatic hexacopter drone that can fly along a predefined path. These drones carry several cameras, ranging from everyday RGB cameras to hyperspectral cameras. In fact, the hyperspectral cameras are what give the team an edge over others working in the field: they capture not only wavelengths in the visible spectrum but also bands that reveal what is happening inside the leaf. Choosing the right wavelengths maximises the plant properties that can be extracted from a given image.

The DJI Matrice 600 hexacopter, an automatic drone with six propellers (as the name suggests), used for capturing images. Source: https://agroinformaticslab.github.io

Satellite images are often used for health monitoring. Unfortunately, these images have low resolution (each pixel covers about 30 m × 30 m of ground) and are generally filled with ‘noise’, or in simpler words, unwanted information. In contrast, the drones provide extremely rich images, with each pixel covering only about 1 cm × 1 cm; a single satellite pixel thus spans the same ground as roughly nine million drone pixels. This gives a much clearer idea of the water and nitrogen content present in the plants. In technical terms, drone-based sensing gives pure pixels, and this high-resolution mapping makes it easy to identify the signs, i.e., the biophysical and the biochemical properties. Once the data, consisting of numerous images, has been obtained, the task of drawing useful conclusions from it comes into play. Here, the powerful tool of image processing is used; in fact, it is at the very heart of the algorithm being developed by Rahul and his team to monitor the health of the crop. Image processing, however, has its own limitations: it works best with a single frequency band, whereas the high-resolution images contain 240 bands per pixel, so the dimensionality must be reduced. Since the team deals with huge amounts of data, machine learning and deep learning techniques are the natural choice for reducing the dimensionality and handling such large datasets. These algorithms ensure that most of the important features in the images are retained.
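To make the dimensionality-reduction step concrete, here is a minimal sketch using principal component analysis, one common choice for compressing hyperspectral pixels. It assumes an array of pixels with 240 bands each and random values standing in for real spectra; it is only an illustration, not the lab’s actual pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

# Pretend hyperspectral data: 10,000 pixels, each with 240 spectral bands.
rng = np.random.default_rng(42)
pixels = rng.random((10_000, 240))

# Compress each 240-band spectrum into a handful of components that capture
# most of the variance, so downstream image processing and ML models work on
# a few informative features instead of all 240 bands.
pca = PCA(n_components=10)
reduced = pca.fit_transform(pixels)

print(reduced.shape)                          # (10000, 10)
print(pca.explained_variance_ratio_.sum())    # fraction of variance retained
```

On real hyperspectral data, neighbouring bands are strongly correlated, so a handful of components typically retains far more of the variance than it does on the random numbers used here.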

Drone image of the plot at the late vegetative stage Source: https://agroinformaticslab.github.io

The drone takes multiple images of every location on the farm. These processed images are then stitched together to form a complete image of a particular region. This super-image, called an orthomosaic, consists of 300-400 individual images. It can then be used to study the biomass (weight of leaf per unit area) or the leaf area index. This information can then be relayed to farmers along with suggestions on the quantity of fertilizer and water needed, as well as the parts of the farm that are perfectly fine.
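As an example of the kind of per-pixel map that can be produced from such an orthomosaic, here is a short sketch that computes the widely used NDVI vegetation index from red and near-infrared bands and flags pixels that may need attention. NDVI is a common stand-in here, not necessarily the index the team uses, and the threshold is made up for illustration:

```python
import numpy as np

def ndvi_map(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, computed per pixel."""
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# Pretend reflectance bands extracted from an orthomosaic (values in [0, 1]).
rng = np.random.default_rng(7)
nir = rng.random((500, 500))
red = rng.random((500, 500))

ndvi = ndvi_map(nir, red)

# Flag pixels whose vegetation index falls below an illustrative threshold;
# in practice such regions might be candidates for extra water or fertilizer.
needs_attention = ndvi < 0.3
print(f"{needs_attention.mean():.0%} of the plot flagged for a closer look")
```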



What have they achieved so far?

Since Rahul’s thesis is nearing completion, most of the models are almost ready. The team is currently working on estimating the nitrogen content and is developing the corresponding ML model, which they aim to complete by December 2020. Like every research project, however, this one has its own challenges. As the project involves a lot of on-field work, Rahul tells us, “No one wants to go into the farm and get ground data. The biggest challenge is to get people to visit the farm.” These site visits are crucial for mapping the farms and visualising the health of the crops, yet most of the people working on the project are inclined towards the data-processing side and are reluctant to visit the sites where data is collected.
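As a rough illustration of what a nitrogen-estimation model of the kind mentioned above could look like (a generic regression sketch on synthetic data, not the team’s model or results):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: spectral features per plot (e.g. reduced hyperspectral
# components) paired with ground-truth nitrogen content from field sampling.
rng = np.random.default_rng(1)
features = rng.random((300, 10))
nitrogen = 2.0 + 3.0 * features[:, 0] + rng.normal(scale=0.2, size=300)

X_train, X_test, y_train, y_test = train_test_split(features, nitrogen, random_state=1)

# Fit a regressor that maps spectral features to nitrogen content,
# then check how well it generalises to held-out plots.
model = RandomForestRegressor(n_estimators=200, random_state=1)
model.fit(X_train, y_train)

print(f"R^2 on held-out plots: {model.score(X_test, y_test):.2f}")
```

The ground-truth samples in such a setup come precisely from the field visits Rahul describes, which is why the on-field work is hard to avoid.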

This image is a map of the field showing the leaf area index (leaf area per unit ground area); the transition from cool colours to warm colours signifies an increase in the leaf area index. Source: https://agroinformaticslab.github.io

So what does the future look like?

According to Rahul, the best application of their research would be to make it openly available to Indian farmers. His vision is to distribute this technology at the Gram Panchayat level across India, which would allow farms to be scanned efficiently. The data from each farm could then be uploaded to a central drive, where it would be processed using the algorithms developed by the team at the Agro Informatics Lab. The valuable information thus extracted would be disseminated to farmers, to ensure healthier crops with better yields.

Although the idea of using drones to scan thousands of farms across India may seem slightly far-fetched, we believe this mammoth task is achievable in the not too distant future. The state of Maharashtra has already started using drones for the large scale mapping of village areas. Having found numerous such initiatives taken up by state governments, it is safe to say that the use of drones in rural India is already becoming commonplace. So Rahul’s dream may be achieved sooner than you think. We sincerely hope that further research and ideation in the field of Agro Informatics shall be able to improve the lot of the farmers in our country.










