P.I.N.G. Issue 16.1

Page 1


PICT IEEE Newsletter Group (P.I.N.G.) is the annual technical magazine of PICT IEEE Student Branch (PISB), published alongside Credenz. It is a magazine of international repute and has gained wide recognition since its inception in 2004. P.I.N.G. focuses on inculcating a sense of technical awareness amongst its readers with its intriguing articles on present-day technology. It takes pride in having readership among various colleges and professional institutes across the globe, thus enriching fellow students, faculty, and professionals with knowledge of recent technologies. For Issue 16.1 we got the opportunity to interact with Mr Navindra Yadav, Founder of Tetration Analytics, Cisco, and Mrs Shalaka Verma, Director, Partner Technology at Microsoft. We would like to thank our authors for their contribution to P.I.N.G. 16.1. We would also like to express our gratitude to our seniors for their continuous support and guidance towards making this Issue a grand success. A special mention to our junior team, as this Issue is a testament to their persistent and diligent efforts.

Paresh Shrikhande Editor

Rutuja Kawade Editor

Anushka Mali Editor

Kiransingh Pal Designer


Dr Amar Buchade, Branch Counsellor

Dear All,

It gives me immense pleasure to write this message for the new edition of the PICT IEEE Student Branch's (PISB) P.I.N.G. The Credenz edition of P.I.N.G. is always special for all of us. This year we have an interesting theme, 'Vision', for Credenz LIVE. During the pandemic period (COVID-19), we conducted all the events online. I am thankful to all the members and the P.I.N.G. team who made the effort to make these events successful and to release P.I.N.G. Issue 16.1.

PISB provides a great learning opportunity for all, giving its student members a chance to showcase their talent and views and to further strengthen IEEE activities. It is a great pleasure to serve PISB as the Counsellor. Working at the PICT IEEE Student Branch and IEEE Pune Section gives me an interesting, valuable, and great learning experience. I am thankful to all the members of PISB for their active support. Because of their active involvement, PISB won the Best Student Branch Award in 2019 and the Darrel Chong Student Activity Award 2020 for Credenz, and two PICT IEEE student members won the IEEE Pune Section Student Member Volunteer Award 2020. I would also like to mention the strong support from Mr R.S. Kothavle, Managing Trustee, SCTR; Mr Swastik Sirsikar, Secretary, SCTR; Dr P.T. Kulkarni, Director, PICT; and Dr R. Sreemathy, Principal, PICT.

PISB conducts many events every year, and these are appreciated widely by students, acclaimed academicians, and industry professionals alike. The events include IEEE Day, workshops, Special Interest Group (SIG), Credenz, Credenz Tech Dayz (CTD), and a National Level coding contest. The team also hosted the very first National Level competition of its kind, Ideathon'20. PISB provides a platform for students to gain insightful knowledge and explore the technological world through these activities. Hence, PISB will continue to help students pursue their technical interests and further strengthen IEEE activities.

I thank all the authors for their contribution and interest. On behalf of IEEE R10 and the IEEE Pune Section, I wish PISB and P.I.N.G. all the success. I congratulate the team for their commendable efforts.

Prof. Dr Amar Buchade
Branch Counsellor, PICT IEEE Student Branch
Secretary, IEEE Pune Section


Flashback

nostalgia

Aishwarya Naik, Ex-Editor, P.I.N.G.

It was my first year at college. I was participating in our technical festival, Credenz'14, at the time, and I remember some students carrying a bunch of magazines to the auditorium. I caught a glimpse of them and eventually got to know about P.I.N.G. in my second year. I was always interested in writing and technology, but what happened in the next two years was a different story, albeit a beautiful one.

Being an Editor for Issue 12.1 was special for many reasons. It was the first coloured P.I.N.G., something we had strived for since the Newsletter's inception. That year was also the 50th anniversary of Star Trek, and the Star Trek piece was the editorial article we had the most fun writing. We also had the chance to interview the renowned Padma Vibhushan Dr Narlikar, Founder of IUCAA. Mostly though, I cherish the team I worked with, the brainstorming sessions we had, and the bond we shared. Working for and then subsequently heading the team taught me a lot about leadership, taking tough decisions, and responsibility for failures. Knowing the reach P.I.N.G. had among IEEE student chapters across the world, striving to bring something new each time was immensely taxing yet rewarding. I remember the time when our team was creatively exhausted and bordering on burnout. Our then PISB counsellor Dr Rajesh Ingle sir called the Credenz team for a meeting and told us how P.I.N.G. had inspired one of the IEEE Student Chapters abroad to start their own technical magazine. That sentence alone was enough to recharge us. Our vigour was back, and we were happy with what we had created together. I believe that amazing things can happen when you let go of your inhibitions towards something new and challenging. Working on P.I.N.G. helped me then, and the lessons I learned have been with me ever since.

I was part of a massive team that ideated, executed, and presented numerous ideas to the seniors. It was the first time that I had been a part of something bigger than myself. Working long hours became the norm because we wanted to do something different together. Coming up with the Call For Articles design, reaching out for articles, meeting exceptional people for interviews, and proofreading till dawn resulted in memories that will remain forever. With such an experience, I went on to be a part of two Issues: P.I.N.G. 12.1 and 13.0.

Aishwarya Naik is currently working as a User Experience Designer and was an Editor of P.I.N.G. 12.1 and 13.0.

Pg 3

ISSUE 16.1

March 2021

CREDENZ.IN


Database vs Database

maven

Is SQL Rising Again?

When the whole world is moving towards NoSQL, one could ask why some are still using SQL. To get the preliminary overview out of the way, what is a SQL database? Sixty years ago, files were used to store data. That continued until the files became unmanageable, prompting the need for software to do the same job. That software is now called a database. The structured variant of that software is commonly known as a SQL database. The features that compelled its use were a fixed schema and a simple query language. SQL made it easier for non-computer-scientists to use and interpret data. It served big industries, and the world in general, well.

Such was the picture before the advent of the 21st century. Along with computing capacity, data too expanded exponentially. In such a world, SQL struggled to keep pace with the ever-growing need for scalability. ACID properties, which were the biggest strength of SQL, became its biggest weakness with respect to scalability.

Another reason behind the failure of SQL was its lack of customizability. The SQL schema was considerably rigid, and it was a tedious task to update it once database sizes grew above a threshold. It also did not provide support for user-defined fields in the database. Migrating schemas took months in some cases, forcing companies to move away from SQL. In short, the problems of scalability and customizability fueled the shift to NoSQL databases like Cassandra, MongoDB, etc.

These NoSQL databases were built with scalability in mind, at the cost of settling for eventual consistency. They scaled to a massive size, as expected. They stored petabytes of data, and their capacity is continuously on the rise. It begs the question: what is making SQL rise again? In the last 4-5 years, big players in the industry have been reviving and rethinking SQL databases. They are trying to rectify SQL's rigid schemas with innovative solutions like dynamic columns, JSON fields, interleaved tables, on-the-fly schema migration, etc. Google and Amazon built scalable SQL databases like Spanner, Redshift, and BigQuery, thus bringing SQL back into the arena. Columnar SQL databases boosted analytics performance many-fold compared to NoSQL. With seamless integration with BI tools, the performance of columnar databases put big data analytics at the fingertips of data scientists. The declarative and simple nature of SQL even motivated NoSQL players to follow suit.

To summarize, once SQL databases dealt with their previous shortcomings of scalability and customizability, they underwent a new phase of revival. Strong mathematical fundamentals, a declarative and simple query language, analytics performance, and seamless BI integration all culminated in raising SQL to new heights. The last decade marked a turbulent period for SQL, but it emerged from it stronger and more robust than ever.
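The JSON-field workaround mentioned above can be sketched with Python's built-in sqlite3 module (a minimal illustration, not any specific vendor's feature; the table, column names, and data are invented, and json_extract assumes an SQLite build with the JSON1 extension, which modern builds include by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A fixed relational schema, plus one TEXT column holding flexible JSON
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, attrs TEXT)")
conn.execute("INSERT INTO products (name, attrs) VALUES (?, ?)",
             ("ssd", '{"capacity_gb": 512, "interface": "NVMe"}'))
# The second row adds an "rpm" field with no schema migration at all
conn.execute("INSERT INTO products (name, attrs) VALUES (?, ?)",
             ("hdd", '{"capacity_gb": 2000, "rpm": 7200}'))
# Plain declarative SQL can still filter on the schema-less JSON data
rows = conn.execute(
    "SELECT name, json_extract(attrs, '$.capacity_gb') FROM products "
    "WHERE json_extract(attrs, '$.capacity_gb') > 1000").fetchall()
print(rows)  # [('hdd', 2000)]
```

This is the middle ground the article describes: the rigid schema keeps SQL's guarantees, while the JSON column absorbs user-defined fields.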


-Prashant Lokhande, Software Engineer, Google



Bridging the rift

interview

with Mrs Shalaka Verma

A passionate global technology sales leader, Mrs Shalaka Verma is the Director, Partner Technology at Microsoft. Currently, she leads a team of Global Cloud Architects to drive the development of differentiated, industry-aligned solutions on Microsoft Cloud and the services around it. She has strong technology research and development leadership, experience scaling up startup solutions, and an established Performance Engineering and Sizing practice. She was the first Quantum Ambassador for IBM in the ISA region and a recipient of a Gold Medal from the Honourable Prime Minister of India (at BARC, 2006). Aside from her professional life, she also holds a keen interest in reading books.

Q

You completed your Bachelor’s in Computer Engineering from PICT in 1999. What were your college days like?

A

Those were the days we will all cherish forever. We didn't have mobile phones, hence a less virtual world. We had more friend circles, and since we lived in a hostel, the bond between us was amazing. I had friends from my native place, and friends in Pune from other engineering colleges as well. I was not a complete geek during my college days, but I can say that I was decent at studying and had fun along with academics. My social media presence on LinkedIn today predominantly reflects my enthusiasm to reach out and give back to people and new talent, nurturing them towards exciting careers.

Q

What was your motivation behind joining the Bhabha Atomic Research Center?

Mrs Shalaka Verma, Director, Partner Technology, Microsoft

"First and fundamental thing is your integrity and ability to build trust. I like to call it the courage to stand in your truth"

A

At that time, I heard a lecture by a scientist at BARC. Since then, becoming a scientist was the biggest thing on my radar. I've always realized that if you go with the flow, the dots in the past start connecting. I have used software development as a solution. But I'm also a voracious reader. Around that time, I read a book called 'The Fountainhead' by Ayn Rand. It was all about individualism, and it left a great impression on me in those years, because I had realized that engineering is the time when you get driven by peer pressure a lot. So that book forced me to think about what I stand for.

Q


You were instrumental in building a supercomputer at BARC that could compute 20-25 times faster than other supercomputers at that time. Can you share your experience of working alongside the global scientific community on a large international collaboration at BARC?


vLUME, a virtual reality software, allows super-resolution microscopy data to be visualised, letting researchers interact with 3D molecular data and explore and analyse complex datasets in virtual reality. The software can be used to help develop treatments for diseases.

A

I was at the start of my career and had many good colleagues who were instrumental in building the ANUPAM supercomputer. It was a phenomenal learning journey, not only in terms of technology and architecture but also in finding your way through a bureaucratic system, figuring out how to build those machines, and having the courage to visualize and execute. I learned many things from my colleagues and seniors at BARC, who are veterans in their fields. The atomic energy field is a multidisciplinary field. Though international exposure came a little later, we built a machine that could handle high-performance computing use cases, including microsimulation. After I had 3-4 years of experience at BARC, the CERN laboratories came up with the Large Hadron Collider project, and it was one of the largest projects we had. At that point, the scientific community realized that the amount of data being produced was humongous, and processing all that data would be hectic. So we came up with a concept called grid computing, and it was my first exposure to the international community. We implemented grids across the world; it was a baby step towards the cloud computing that we see today, and that's how most of these things work.

Q

From BARC, you moved on to Mobileum, which is a mobile analytics company, where you helped the startup establish tech leadership. What was the transition like, from working on performance optimization for a supercomputer to taking up projects in the telecommunications industry?

A

Throughout my career, whenever I've changed jobs, I've changed roles. So, it's never a continuation of something that I've done in the past. Mobileum was called Roamware at that time and was a startup. It was a risk because I left everything that I had done for 5-6 years in the early stage of my career. One strong advantage I've always carried with me is that I learn different things fast, which worked in my favour at Roamware. I picked up a lot of insights into how the mobile ecosystem and its protocols work.


I was curious to understand these things because this field was taking shape in India at that time. Mobileum gave me a chance to know more. It was hard, and I had to study a lot to get myself up to speed in understanding that ecosystem, but it gave me a different perspective on how the industry works, how to understand an industrial problem, how to connect with the business that customers want to drive, and how to connect the backend to technology. The art of knowing this is something that I've carried with me ever after in my whole career. That's fun, because it makes us see the technology come alive. You see yourself solving a real-world problem.

Q

You were the first Quantum Ambassador in the ISA region at IBM. What work did that entail?

A

I had to put in a lot of effort, as I had to learn everything by myself. I got curious about it because I carried some background in atomic physics from my days at BARC. I read a lot on the Universe, Singularity, and Event Horizons from Stephen Hawking, Einstein's Theory of Relativity, etc. In one of his books, Stephen Hawking describes picking up quantum physics from the smallest particles; it is so incremental that it made me think about how quantum physics works. While those things were on my reading list, IBM suddenly came up with the Quantum Computer, which they were planning to make available on the cloud in a year or so. I started researching it, and then an internal IBM program started recruiting volunteers for stretch assignments, which I opted for, and I was blessed with a supportive coach and mentor. I attended their program, did well there, and got to become a Quantum Ambassador.

Q

You have managed four major markets, ISA, Korea, ASEAN and ANZ, for Technical Sales of Storage and Software-Defined Solutions at IBM. Was there a stark difference in delegating responsibilities for these different markets? Was it challenging?



Stretching racks, ingenious devices as small as a few micrometres, let biologists study how individual biological cells react to physical forces using micro-scaffolds, which are produced by a direct-laser-writing 3D printing process.

A

When I took that role, I knew everything about storage; I never had a challenge technically. Everything that I learned in those two years was about multicultural navigation. It was phenomenal learning in terms of people handling, people management, and making cross-cultural dialogue and communication work. It entails knowing how to generate the right kind of emotions with your message among the right people, how to convey that message to differently culturally aligned audiences, and understanding what works for and motivates what kind of people. Knowing what works for individuals, based on the background they carry, the country they live in, the sales culture, the customer maturity in those countries, and the problems being addressed, changes the way you sell the same technology in different areas. That geared me up for larger global roles. It also helped me understand how to establish leadership when you are not able to meet your team every day, through dialogue and trust, by which people will tell you what works when you act as an enabler. The biggest takeaway is that I understood that leadership is really about serving the people that you lead. So, if you can act as an enabler for the team, irrespective of culture and background, and if you can get into a dialogue where you identify their blockers and remove them, then I think that establishes you as a leader across boundaries.

Q

What is the one quality that you believe a leader must have to succeed?

A

The first and fundamental thing is your integrity and your ability to build trust. I like to call it the courage to stand in your truth, which gives you substance and gravitas. It is easy for other teams to sense, and it is something that you cannot pretend. You need to bring your authentic self to the job, and you must truly care about the success of others. The second is communication. Creating clarity, generating energy, and then driving for success are the three pillars of your communication.


The way you connect with people to drive them to do what is necessary, making things very clear without being confusing, and, when there is a high level of uncertainty in the outcome, still motivating the individual enough to go for it: these are the leadership principles that can make one successful.

Q

You have served as a Tech Sales Executive for IBM. How do you think the rise of Blockchain technology will lead to the evolution of supply chain management?

A

Fundamentally, Blockchain tries to take away third-party trust. It creates an environment where the contributing parties can mutually trust each other; the technology itself becomes the trust layer. If you say something, it becomes immutable on the Blockchain. Hence the rise of Blockchain started with cryptocurrencies, where the spirit was removing the need to trust a bank. It also gave rise to optimizations in asset management: a consumer, who has no data apart from what is provided on the label of a product, can learn about its origin. Blockchain gives traceability and accountability, which can be further optimized using AI. You can automate the payment process using the policies in the Blockchain. So there are many such things in the supply chain that Blockchain can optimize and automate. Supply chain optimization is an AI problem, but many use cases are consumer-driven, and these can be enabled using Blockchain.
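The immutability idea described above can be illustrated with a toy hash chain (a deliberately simplified sketch, not any production blockchain; the shipment records and field names are invented):

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, which include the previous block's hash
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain):
    # Tampering with any earlier record breaks every later link
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev_hash": prev}):
            return False
    return True

chain = []
add_block(chain, {"shipment": "A123", "origin": "farm-1"})
add_block(chain, {"shipment": "A123", "checkpoint": "port"})
print(verify(chain))                   # True
chain[0]["data"]["origin"] = "farm-2"  # attempt to rewrite history
print(verify(chain))                   # False
```

This is the traceability property the answer points at: once a supply-chain record is on the chain, silently altering it is detectable by anyone.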

Q

Is it viable to say that we will see the applications of Blockchain in every other field in the future, concerning the security factor?

A

We saw a lot of POC uptake in 2016-17 in the Blockchain world. Very few converted into production, because of the promise of the Blockchain that you do not have to put inherent trust in anybody. That is not how any business, governed industry, or regulation works. You try to create a consortium, and once you do, is a blockchain even required in the presence of a trusted network?



X-ray scintillators using an environmentally friendly material, an organic manganese halide compound, perform very well for imaging. The compound can be made into a powder, which is combined with a polymer to make a flexible composite that acts as a scintillator.

These kinds of challenges show that the Blockchain's promise of removing third-party trust and the trusted networks on which businesses actually operate are two different things. Many people in the community did very different kinds of POCs, which leveraged not only the trust factor of the Blockchain but other factors as well. I think that is where the growth is and where we will see enough use cases getting into production.

Q

IBM Q has taken many initiatives to make quantum computing open-source, one of them being the Qiskit SDK. What different use cases will it lead to in the future?

A

Almost every company tries to bring out its own software development toolkit for the democratization of this technology, to fuel the development ecosystem. Right now, it looks very promising for quantum-computing kinds of problems: optimization problems and problems that have an exponential scale. If we consider finance, portfolio management, fraud detection, supply chain, etc., optimization cuts across all of them. In the field of quantum computing, there is also new-materials discovery, precision medicine, medical research, pharma, and many other industries where people will be interested in solving some of these problems. There are Hamiltonian simulation problems, through which people try to find alternative energy resources and create new materials, for which there is no solution today. The key areas everybody seems to focus on are pharma, medicine, BFSI, manufacturing, etc. Hence, imagination is the key for us to understand which industries will be impacted by quantum computing. There will be some people who understand the potential and come up with a use case that is life-changing and era-defining in the field of Quantum Computing, because the technology holds so much promise.
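To give a flavour of what such SDKs let developers express, here is a toy single-qubit superposition in plain Python (this is not the actual Qiskit API; it only mimics the underlying state-vector math of a Hadamard gate):

```python
import math

# A single-qubit state as amplitudes over the basis states |0> and |1>
state = [1.0, 0.0]  # starts in |0>

def hadamard(s):
    # H|0> = (|0> + |1>)/sqrt(2),  H|1> = (|0> - |1>)/sqrt(2)
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[1]), h * (s[0] - s[1])]

state = hadamard(state)
# Measurement probabilities are the squared amplitudes
probs = [a * a for a in state]
print(probs)  # roughly [0.5, 0.5]: an equal superposition
```

Toolkits like Qiskit wrap exactly this kind of linear algebra (at much larger scale) behind circuit-building APIs, which is what makes the technology approachable for the optimization and simulation use cases mentioned above.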

Q

How do you integrate the knowledge of so many domains into the work you do today?


A

It comes naturally. As I said, the dots will always connect along the path. No knowledge gets wasted. It's just that you will apply different parts of the same knowledge you possess in other areas to solve a problem. For example, when I moved to IBM, I leveraged the learning I had about making applications scalable, and connected it back to the server side, the storage, and the computation, to figure out how to use this knowledge well in the technical sales profession for selling infrastructure. When cloud computing was the buzz, I leveraged grid computing to understand cloud computing. Again, when I was a beginner learning quantum computing and wanted to use my skills to prioritize business problems, I recalled my basics of atomic physics and the knowledge I had gained from the telecommunication industry. In my view, one should figure out how to apply the knowledge one possesses to the specific task at hand.

Q

What message would you like to give to the readers of P.I.N.G. 16.1?

A

Do not run after money early in your career. Pursue excellence, and success will follow you. Don't let people define what success should mean for you, because the whole external world thinks about success in terms of power, position, or salary package. But we must internalize it, seek within our hearts, and understand that we can earn power and position by doing our best at what we do.

We thank Mrs Shalaka Verma for her valuable time and contribution towards P.I.N.G.

-The Editorial Board



SpaceX Demo II

editorial

envisioning martian civilisation

A new era of human spaceflight is set to commence, as Elon Musk's rocket company SpaceX launched its commercially built and operated Falcon-9 rocket carrying the Crew Dragon spacecraft on NASA's SpaceX Demo-2 mission to the International Space Station. The spacecraft lifted off with astronauts Robert Behnken and Douglas Hurley at 3:22 P.M. EDT, May 30, 2020, from Launch Complex 39A at the Kennedy Space Center in Florida.

"This is hopefully the first step on a journey toward a civilization on Mars," said Elon Musk, Founder and CEO of SpaceX. The Demo-2 mission is a part of NASA's Commercial Crew Program to develop reliable and cost-effective access to and from the ISS and to facilitate the development of human spaceflight systems. The mission marked the final flight test for the spacecraft's systems and was intended to validate its different components, including the spacecraft (Crew Dragon), the launch vehicle, i.e. the rocket (Falcon 9), the launch pad (LC-39A), and the operations capabilities.

Spacecraft - The Crew Dragon: The Crew Dragon (Dragon-2) spacecraft used in the Demo-2 mission is the successor to the Dragon-1 cargo spacecraft. Unlike its predecessor, Dragon-2 can dock itself to the ISS instead of being berthed, and can carry up to 7 passenger astronauts to and from the ISS.


Crew Dragon is equipped with an integrated launch escape system (LES) capable of accelerating the vehicle away from the rocket in an emergency, using a system of four side-mounted thruster pods with two SuperDraco engines in each pod. Sixteen smaller Draco thrusters are used to maneuver between apogee and perigee, adjust the orbit, control attitude, and orient the spacecraft during the mission. The arrangement of eight SuperDraco engines, with an escape thrust of 73 kN, provides fault-tolerant propulsion for Dragon's launch escape system. In an emergency, these eight SuperDraco engines can carry the spacecraft half a mile from the rocket in a few seconds.

The SuperDraco is one of the earliest 3D-printed rocket engines. The engine's combustion chamber is printed from Inconel, an alloy of nickel and iron, using the process of direct metal laser sintering. It has a printed protective nacelle which helps it function well at high chamber pressure (6,900 kPa) and high temperature, and which prevents fault propagation in the event of an engine failure. It uses a storable hypergolic propellant mixture of monomethylhydrazine (MMH) fuel, referred to in the specification as MIL-PRF-27404, and dinitrogen tetroxide oxidizer, making it capable of being restarted many times, throttling deeply, and providing precise control during a propulsive landing of the Dragon capsule.

Pressurized Section (Volume: 9.3 cubic meters): The pressurized section of the spacecraft, also referred to as the Dragon capsule, allows for the transport of humans as well as environmentally sensitive cargo. Towards the base of the capsule, outside the pressurized structure, are the Draco thrusters, Dragon's GNC (Guidance, Navigation, and Control) bay, and Dragon's heat shield.

Trunk (Volume: 37 cubic meters): The Dragon's trunk supports the spacecraft during ascent to space, carries unpressurized cargo, and houses Dragon's solar panels, which provide power during the voyage and while on the station.



Technical Overview of the Crew Dragon Spacecraft:
Height: 8.1 m
Diameter: 4 m
Capsule Volume: 9.3 m3
Trunk Volume: 37 m3
Launch Payload Mass: 6,000 kg
Return Payload Mass: 3,000 kg

Rocket - Falcon-9: As the first reusable orbital-class rocket, Falcon-9 has managed to overcome the major issue of space travel being expensive by having the crucial parts of the rocket fly back. SpaceX designed it for the transportation of payloads and the safe travel of people, and it has carried out 85 missions since its inception, the most ever for a U.S. rocket. Its debut was a lot more modest than the latest launch for NASA: on June 4, 2010, it successfully launched a mock version of the Dragon, SpaceX's reusable cargo spacecraft, and after 6 months it launched an actual Dragon to space and recovered it successfully, thereby becoming the first to do so. This 230-foot-tall dual-stage booster can loft up to 25 tons of payload to low Earth orbit. Its architecture consists of the first stage, which holds nine Merlin engines; the second stage, powered by a single Merlin Vacuum engine; and an interstage, responsible for holding the two stages together. Block-5, the latest evolution of the original Falcon-9 v1.0, carries an increased supply of propellant for the first-stage landing.

The Octaweb is a bolted aluminum structure holding the nine first-stage Merlin engines, whose combined thrust at sea level is 1.9 million pounds-force. The first stage of the Falcon-9 for this mission returned to Port Canaveral on June 2, 2020. With nine Merlin engines, the remaining engines can pick up the slack if one fails, which greatly reduces the chances of mission failure. During its descent, the rocket flips around using cold gas thrusters, and three Merlin engines fire for the boostback burn. The interstage is a carbon composite structure, topped with a 5.2-meter-long fairing to protect the satellites; pneumatic pushers incorporated in this stage allow the smooth separation of the first and second stages. The hypersonic titanium grid fins guide the rocket during re-entry. According to Elon Musk, with these robust technologies and the efforts of the scientists, engineers, and astronauts at SpaceX, they have taken their first step towards a civilisation on Mars. The second stage has a vital role to play: it carries the payload to the desired orbit using its single Merlin engine. Composite Overwrapped Pressure Vessels (COPVs) store enough helium to pressurize the propellant tanks; this has increased the thrust to 2.2 million pounds-force. Falcon-9 flights are also dedicated to building the Starlink constellation, which will assure high-speed Internet connectivity to all sectors of the world. SpaceX aims to launch 1,000 more such satellites, having already launched 480 Starlink satellites since May 2019.

The first stage is completely reusable and is provided with a latch mechanism in the carbon-fiber landing legs for independent landing. It incorporates aluminum-lithium tanks to hold liquid oxygen and RP-1 propellant, and integrates the nine Merlin engines, which are held in place by the Octaweb structure.


-The Editorial Board



Unraveling the techie

interview

with Mr Navindra Yadav

With over 25 years of experience in Computer Science and Networking, Mr Navindra Yadav has been a prominent figure in the IT and networking industry. Based on his experience and technical expertise, he went on to start Tetration Analytics, a unit of Cisco Systems. He is an innovative, out-of-the-box thinker with a wealth of experience in building systems ranging from large distributed systems to embedded platforms. Being a research enthusiast, he has 143+ issued US patents and 100+ pending patent applications. Aside from his illustrious professional career, Mr Navindra Yadav is an avid reader of scientific and technical books.

Q

You've worked in Tetration Analytics, a unit of Cisco Systems Inc., which focuses on data centre monitoring, security, and data analysis. What major difference has this made in Silicon Valley?

A

We have generated and discovered whitelist security policies and produced a zero-trust environment for cybersecurity inside the data centre for cloud computing. Most of the security rules put in place in the industry are human-driven. Usually, when a cyberattack happens, human-made policies, e.g. blacklist rules, are the first to be used as a response. Even today, the majority of data centres are designed with traditional parameters only. Cisco Tetration addresses this challenge in a comprehensive way, using a multidimensional workload-protection approach with AI and ML to discover these zero-trust policies and dynamically push them out.
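The whitelist-discovery idea can be caricatured in a few lines (a drastically simplified sketch of the zero-trust approach described above, not Tetration itself, which applies AI/ML to massive telemetry; the flow records and role names here are invented):

```python
# Observed legitimate flows inside a data centre: (source, destination, port)
observed_flows = [
    ("web-1", "db-1", 5432),
    ("web-2", "db-1", 5432),
    ("web-1", "cache-1", 6379),
]

def discover_whitelist(flows):
    # Zero-trust stance: only explicitly observed flows become allow rules;
    # everything else is denied by default (the opposite of blacklist rules)
    return set(flows)

def allowed(flow, policy):
    return flow in policy

policy = discover_whitelist(observed_flows)
print(allowed(("web-1", "db-1", 5432), policy))     # True: matches a learned rule
print(allowed(("attacker", "db-1", 5432), policy))  # False: denied by default
```

The contrast with blacklist rules is the key point: a blacklist must anticipate every attack, while a learned whitelist only has to describe normal behaviour.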

Mr Navindra Yadav, Ex-Cisco Fellow, Founder of Tetration Analytics

“The more books you read, the more avenues you discover and the more ideas you are introduced to.”

Looking at network fabrics and switches, the key difference between a switch and a router is that routers have large, deep buffers, which are used to hold packets and handle asymmetric bandwidth links, whereas switches have shallow buffers built for really low power. Any big company with huge data centres, like Amazon, Google, or Microsoft, uses switches inside the data-centre fabric; 30-50% of the data is carried using network switches. However, once a link is saturated, TCP throughput starts collapsing. Since we cannot increase the buffers of the switches in traditional networks, each flow follows only one TCP path through the entire network fabric to keep packets from getting reordered. The thought while building CONGA was that the fabric itself could be used to measure real-time latency at any instant of time.


Q

Your most-cited publication, “CONGA: Distributed Congestion-Aware Load Balancing for Datacenters”, is based on load balancing in distributed systems. What challenges were you setting out to address when you started to work on this? What advantages does CONGA have over the conventional network load balancers used today?

A



A boron-lanthanide nanostructure is a spherical tetrahedron composed of eighteen boron and three lanthanide atoms. It was studied using photoelectron spectroscopy, in which a powerful laser beam is fired at the cluster to vaporize the desired compound.

So inside the fabric, when we observed that the latency difference would never reorder packets, we sent packets of the same flow on different paths to optimize the network. This allowed utilization to cross the 50% threshold, so at the same cost of the fabric you can almost double the number of servers connected to the network. If you look at the research done with CONGA, you will observe that when we run big-data calculations on a data-centre fabric with CONGA, the entire process completes faster. Now, this technology has become standard for big companies as well.
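The core idea, splitting a flow into flowlets and steering each to the least-congested path, can be sketched in a few lines of Python. This is an illustrative toy (the path names and latency map are invented), not CONGA's actual mechanism, which piggybacks congestion metrics on data packets inside the fabric:

```python
import random

def pick_path(latencies):
    """Choose the path with the lowest measured latency, breaking ties at random.

    `latencies` maps path id -> most recently measured latency, standing in
    for the real-time congestion feedback the fabric would provide.
    """
    best = min(latencies.values())
    return random.choice([p for p, lat in latencies.items() if lat == best])

def route_flowlets(flowlets, latencies):
    """Assign each flowlet (a burst of packets separated by an idle gap) to a
    path independently.  Packets within one flowlet stay on one path, so the
    flow as a whole can use many paths without reordering packets."""
    return {flowlet: pick_path(latencies) for flowlet in flowlets}
```

With latencies such as {"spine1": 40, "spine2": 15}, new flowlets are steered to spine2 until its measured latency rises, at which point later flowlets shift elsewhere.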

Q

You started with Siemens in 1991 and today in 2020 you have over 143 US patents of your own and many more pending. So, what was the biggest challenge in your professional life and how did you combat it?

A

In terms of challenges, the biggest challenge I’ve faced in my career was founding, starting, and building Tetration. Before Tetration Analytics, I had always been an engineer, an individual contributor who never managed people. My challenges involved the transition from being a pure engineer to becoming an engineer and a businessman who understands the business that includes building a company, understanding how markets operate and the changing buying patterns because that is needed whenever you build a start-up. Tetration was a start-up inside Cisco as they funded us, but first I had to hire people. Then the next part was building a business. You need to understand technology as well as the problems faced by a customer because, in the customer environment, the purchasing group is often completely different or disconnected from the people who have the problem. Understanding how you connect all of these and defining a good market motion were challenges but it’s been a true learning journey.

Q

While coming up with ideas for so many patents, have you ever reached a saturation point in terms of innovative ideas? How do you overcome this barrier?


A

Truly speaking, no, because before I reach a saturation point, I keep jumping into different areas. I just look at a space of problems and try to fill that space with solutions. Most companies have legal teams that handle patent mining and the legal aspects of a patent. That’s why the patents that I’ve filed in my life are those that actually make business sense to the company I was working for. What really matters is whether you can execute ideas and make a difference in the real world. My ideas have been developed from things that I have read. We choose a certain problem to work on, and simultaneously there’s something new that keeps coming in.

Q

You’ve been working in Cisco for more than 17 years now, and have worked in companies like Siemens, Lockheed Martin and Google. What would you suggest to students who aim to have a successful start-up?

A

For anyone who wants to build a technical start-up, you need to understand what you’re building, and you need a good business idea or a good technology idea. The next thing to understand is that one cannot operate individually, not unless you have all these capabilities within you. Build strong teams; it’s not about you, it’s about what the team can achieve together. So find people in diverse fields and bring them together: a person who understands business, how the market really works, how to take a product to market, what the price points are, and how to adjust those price points based on different criteria, which is essential for managing the business aspect; another person who knows how to market the product and create brand awareness; and then someone who understands technology and knows how to build the product. The most successful start-ups have this chemistry between the founders. They watch each other’s backs, cover for each other’s weaknesses, and build on each other’s strengths. They must also have good recruiting skills. It’s the energy in the team that matters the most. Know your goals for the start-up.



Trojan Horse is a nanoparticle with a coating of L-phenylalanine amino acid that helps to self-destruct cancer cells and reduce tumour growth. This coating of amino acids on Nano-pPAAM stimulates excessive reactive oxygen species (ROS) production.

Depending on the goals that you have, you also need to manage the venture capitalists in your enterprise. Another thing that has enabled me to run a successful start-up is Indian culture, which has taught me the importance of being humble.

Q

With the experience of having worked on more than 200 patents, what is your mantra for balancing a challenging career and pursuing your interests?

A

Technology has been my passion. Large bookshelves run along the walls of my room, and I keep reading different books on technology or whatever interests me. Regarding the number of ideas I get for my patents: the more books you read, the more avenues you discover and the more ideas you are introduced to. I have a mixed background in computer science, hardware, and low-power electrical design, so it doesn’t have to be just computer science; I read books on electrical engineering, biomedicine, antibodies, etc. as well. Out of all the books that I’ve read, there is a series called ‘The Art of Computer Programming’ by Donald Knuth. Had I read it during my engineering days, I probably would not have understood it as well as I do now. Having my passion aligned with my work has been a blessing for me, because I keep reading random things and different ideas pop up in my mind. Sometimes I have to think about whether something could become a good idea or not; that’s where the go-to-market experience comes in. While working on the business side at Tetration, growing there and building it, I got that exposure. It’s okay to have ideas, but before executing one you need to ask yourself how you’ll execute it and grow it, what business needs it solves, and whether it will be a commercial success or not.

Q

Jeffry Taft has mentioned that your ability to tackle entirely new classes of problems, coupled with your extensive expertise and inventiveness, allowed his team to make great progress on key problems related to smart grid communication. What is unique about your problem-solving approach?


A

Jeff and I worked together from 2007 to 2010. He came from the true electrical domain, which I hadn’t been exposed to since I had received a degree in CS; whatever I had studied until then tended to be low-power designs, DC circuits, etc. My expertise was mostly around network communication. When we started working together at Cisco, I ended up learning a lot from him. One of the things I like to do is always come back to the basics of any subject or project I’m picking up. If I can explain it to myself without having a hundred doubts, that means I’ve understood what needs to be done. Another important thing is to stay humble enough to ask questions.

Q

Being someone who has been working in the profession of software engineering and data science, what according to you are the issues that the industry still faces at present?

A

Data science is an interesting field. Computer scientists apply statistics in a field called machine learning; these practices have been going on for a long time, and we only recently started turning them into algorithms, so it’s not exactly new. Regarding the challenges around data science, privacy is a big one with traditional big-data systems where you’re capturing this data. Security is another challenge. The next field is Artificial Intelligence. When we work on neural networks with many layers of depth, we as humans cannot understand or process more than about ten layers of logic in our heads. The models that computers generate can go deeper than seven or ten layers, and the moment they go deeper than that, humans can no longer understand what the computers are doing.

Q

Given the amount of experience you have, could you enlighten our readers by explaining the importance of learning effective team management?



The artificial neurotransistor is the first-ever imitation of a neuron that allows a system to learn. It uses a polymer called sol-gel, applied on a silicon wafer along with circuits. This arrangement leads to a free flow of ions, which are heavier than electrons.

A

After starting Tetration Analytics I started managing people. The thing that I implemented is really simple: be transparent with your employees. The biggest thing is the trust that has to be built between them and you. Given that I’m an engineer, and most of the employees at Tetration Analytics are engineers as well, I understand their suggestions and the different doable options. You need to learn that when you are working in a team, problems don’t have to be solved the way you want. You may strongly believe that your approach is right, but you must remember to give people the freedom to make mistakes and feel a sense of responsibility. That’s my basic philosophy: give them every reason to trust you, and vice versa.

Q

The world has seen rapid change in the sector of Software Engineering at the beginning of the twenty-first century. Where do you envision this technology will take us in the next few years?

A

In the coming years, a lot of software will be written by other software; I think that transition is coming. We already see situations where we don’t use humans to test our software: nowadays, we use desktop software that finds bugs inside other software and rewrites its own tests. So, I believe a lot of mundane software, five to ten years from now, will be written by software itself. People write machine learning software that is capable of writing machine learning algorithms with the help of libraries; using ensemble methods and supervised learning algorithms, we can predict the appropriate results. Being passionate about cybersecurity, I see huge potential in that technology. There’s an encryption technique called homomorphic encryption, where algorithms work on top of encrypted data and produce results that make sense to us only when they are decrypted.
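The flavour of homomorphic encryption can be shown with unpadded (“textbook”) RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. This toy sketch uses deliberately tiny primes and is in no way secure; practical schemes such as Paillier or CKKS are far more involved:

```python
# Textbook RSA with tiny primes -- only to illustrate the homomorphic
# property, never for real use.
p, q = 61, 53
n = p * q              # 3233, the public modulus
e = 17                 # public exponent
d = 2753               # private exponent (d * e = 1 mod lcm(p-1, q-1))

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Multiply the ciphertexts of 7 and 6 without ever decrypting them...
product_cipher = (encrypt(7) * encrypt(6)) % n
# ...and the decrypted result is the product of the plaintexts.
assert decrypt(product_cipher) == 42
```

This is exactly the property the interview describes: the computation happens on encrypted values, and only decryption reveals a meaningful result.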

Q

Could you please tell us the skills and qualities that every engineer should possess so that they can contribute better to the industry?


A

One of the few things I could say is: be deeply persistent and have faith, because challenges will always come in life. Sometimes you don’t get solutions to a problem, so don’t give up easily. If you are on a mission, or you really believe in something and want to make it happen, then that belief within you will automatically drive your grit. My general philosophy is not to get too serious about things, and to be nice to other people.

Q

What message would you like to give to our readers?

A

Steve Jobs once said, “Stay hungry, stay foolish”. Always be inquisitive; there is absolutely nothing wrong with asking questions until you get answers to them. Stay inquisitive throughout your life, and you will eventually discover what truly clicks for you. The more risks you take in your career, the greater the chances that you will taste success. You have freedom of movement and freedom of thought, so focus on your goals. There are a lot of things to explore in this world, and the exposure you get and the experiences you gain will always help you in the future.

We thank Mr Navindra Yadav for his valuable time and contribution towards P.I.N.G. -The Editorial Board



Voice Technology

maven

Demystifying Speech Recognition

Voice technology is not a new concept anymore. Voice recognition and experiences have come a long way and have advanced to the point where they seem entirely natural. However, voice technology is not something recent; its roots go back to the 1950s.

Alexa’s natural-language-understanding models classify requests according to the domain, the particular service that should handle the intent the customer wants to execute. The models also identify the slot types of the entities named in requests, or the roles those entities play in fulfilling the request. This decreases the effort expended in authoring complex dialogue-management rules. Dialogue management for Alexa Conversations is powered by a dialogue simulator for data augmentation and a conversations-first modeling architecture. The dialogue simulator generalizes a small number of sample dialogues into tens of thousands of annotated dialogues, and the modeling architecture then leverages the generated dialogues to train deep-learning-based models that support dialogues beyond just the happy paths provided by the samples.
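The shape of this output, a domain, an intent, and the slots filled from the utterance, can be illustrated with a toy rule-based parser. The patterns, domain names, and slot names below are invented for the example; Alexa's actual NLU is statistical, not rule-based:

```python
import re

# Hypothetical utterance patterns mapped to (domain, intent).
PATTERNS = [
    (r"play (?P<song>.+) by (?P<artist>.+)", "Music", "PlaySong"),
    (r"what's the weather in (?P<city>.+)", "Weather", "GetForecast"),
]

def parse(utterance):
    """Return the domain, intent, and slot values for an utterance."""
    for pattern, domain, intent in PATTERNS:
        match = re.fullmatch(pattern, utterance.lower())
        if match:
            return {"domain": domain, "intent": intent,
                    "slots": match.groupdict()}
    return {"domain": "Unknown", "intent": None, "slots": {}}
```

For instance, parse("Play Yesterday by The Beatles") yields the Music domain, the PlaySong intent, and slots for the song and the artist.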

In 1990, Dragon released Dragon Dictate, the first speech recognition software for consumers. Google later introduced an application called Google Voice Search, which utilized data centres to compute the enormous amount of data analysis needed to match user queries with actual examples of human speech. In 2011, Apple launched Siri, which was similar to Google Voice Search. Alexa is a virtual-assistant AI technology capable of voice interaction that provides real-time information; it can also serve as a home automation system to control other devices. Users can extend Alexa’s capabilities by installing ‘Skills’, and Amazon allows developers to build and publish their own skills using the Alexa Skills Kit. Alexa developers can now leverage a dialogue manager powered by deep learning to create complex, nonlinear experiences.

With Alexa Conversations, the dialogue simulator automatically generates diversity that covers skill functionality, and it also generates difficult or uncommon exchanges that could occur. The input to the dialogue simulator includes developer application programming interfaces (APIs), slots and associated catalogues for slot values (e.g. city, state), and response templates (Alexa’s responses in different situations, such as requesting a slot value from the customer). These inputs, with their input arguments and output values, define the skill-specific schema of actions, and slots that the dialogue manager will predict. The dialogue simulator uses these inputs to generate additional sample dialogues in two steps.

Alexa Conversations is a new AI-driven approach to dialog management that enables you to create skills that customers can interact with in a natural, less constrained way, i.e. by using the phrases and in the order they prefer.

1) The simulator generates dialogue variations representing different paths a conversation can take. More specifically, we conceive a conversation as a collaborative, goal-oriented interaction between two agents, a customer and Alexa. In this setting, the customer has a goal to achieve, and Alexa has access to resources that can help the customer reach the goal.



Shewanella oneidensis is an anaerobic, metal-breathing bacterium capable of producing materials such as molybdenum disulfide, a material able to transfer electrons easily, which can help in electronics, electrochemical energy storage, and drug-delivery devices.

From the sample dialogues provided by the developer, the simulator first samples several plausible goals that customers interacting with the skill may want to achieve. Conditioned on a sample goal, synthetic interactions between the two simulator agents are generated. The customer agent progressively reveals the goal to the Alexa agent, while the Alexa agent gathers the customer agent’s information, confirms it, and asks follow-up questions about missing information, guiding the interaction towards goal completion.

Supported patterns include carryover of entities, anaphora, confirmation of slots and APIs, and proactive offers of related functionality, as well as robust support for a customer changing their mind midway through a conversation. The Alexa Conversations modeling architecture uses state-of-the-art deep-learning technology and consists of three models: a named-entity-recognition (NER) model, an action-prediction (AP) model, and an argument-filling (AF) model. The models are built by combining supervised training on the annotated synthetic dialogues generated by the dialogue simulator with unsupervised pre-training of large Transformer-based components on text corpora. Alexa follows an end-to-end dialogue-modeling approach, where the models consider the current customer utterance and context from the entire conversation history to predict the optimal next actions for Alexa.

2) In the second step, the simulator injects language variations into the dialogue paths. The variations include alternate expressions of the same customer intention. Some of these alternatives are provided by sample conversations and Alexa response templates, while others are generated through paraphrasing. The variations also include alternate slot values, which are sampled from slot catalogues provided by the developer.

First, the NER model identifies slots from customer utterances, selecting from slots that the developer defines as part of the build-time assets. The NER model is a sequence-tagging model built using a bidirectional LSTM layer on top of a Transformer-based pre-trained sentence encoder. In addition to the current sentence, NER also takes dialogue context as input, which is encoded through a hierarchical LSTM architecture that captures the conversational history, including past slots and Alexa actions.

Through these two steps, the simulator generates tens of thousands of annotated dialogue examples that are used for training conversational models. A natural conversational experience could follow any one of a wide range of nonlinear dialogue patterns supported by dialogue-simulator and conversational-modeling components.
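A drastically simplified sketch of that two-step expansion: step one enumerates dialogue paths, step two injects slot-value and phrasing variations. The templates and the city catalogue here are invented for illustration; the real simulator works from developer-supplied sample dialogues, APIs, and response templates:

```python
import itertools

# Hypothetical build-time assets: alternate phrasings and a slot catalogue.
REQUEST_TEMPLATES = ["book a table in {city}", "find me a restaurant in {city}"]
CONFIRM_TEMPLATES = ["You want {city}, correct?", "Confirming: {city}?"]
CITY_CATALOGUE = ["Pune", "Mumbai"]

def expand_dialogues():
    """Cross every request phrasing, confirmation phrasing, and slot value
    to produce annotated (customer turn, Alexa turn, slots) examples."""
    dialogues = []
    for request, confirm, city in itertools.product(
            REQUEST_TEMPLATES, CONFIRM_TEMPLATES, CITY_CATALOGUE):
        dialogues.append({
            "customer": request.format(city=city),
            "alexa": confirm.format(city=city),
            "slots": {"city": city},
        })
    return dialogues
```

Even this toy cross-product of two request phrasings, two confirmations, and two catalogue values yields eight annotated dialogues; with realistic catalogues and paraphrasing, the same expansion reaches tens of thousands.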

Next, the AP model predicts the next optimal action for Alexa, such as calling an API or responding to the customer, to either elicit more information or complete a request. The action space is defined by the APIs and Alexa response templates that the developer provides during the skill-authoring process. The AP model is a classification model that, like the NER model, uses a hierarchical LSTM architecture to encode the current utterance and past dialogue context, which ultimately passes to a feed-forward network.



Self-erasing chips are three-atom-thick, temporarily energy-storing layered semiconductor devices laid on a thin film of azobenzene molecules. These chips could potentially help stop counterfeit electronics using a self-erasing bar code.

Finally, the AF model fills in the argument values for the API and response templates by looking at the entire dialogue for context. Using an attention-based pointing mechanism over the dialogue context, the AF model selects compatible slots from all the slot values that the NER model recognized earlier. The AP and AF models may also predict and generate more than one action after a customer utterance; they can make sequential predictions of actions, including the decision to stop predicting more actions and wait for the next customer request.

Arkonovich claims that Alexa Conversations promises to be a breakthrough for developers writing Alexa skills. It will create new experiences for customers, which developers can provide by supplying sample dialogs, without writing lots of code. Alexa’s AI generates sample utterances and keeps track of context, all with very little input from the user’s skill code. Developers can use Alexa Conversations to extend current skills without rewriting the entire codebase, and customers can speak more naturally or change their minds mid-conversation, and Alexa will keep up.

Consistency-check logic ensures that the resulting predictions are all valid actions, consistent with developer-provided information about their APIs. The inputs include the entire dialogue history as well as the latest customer request, so the resulting model predictions are contextual, relevant, and not repetitive. The architecture leverages large pre-trained Transformer components (BERT) to encode current and past requests in the conversation. To keep model build-time and runtime latency at the state of the art, inference-architecture optimizations are performed, such as accelerating embedding computation on GPUs, implementing efficient caching, and leveraging both data- and model-level parallelism.

Arrive offers parking automation solutions with Alexa to help their customers find, book, pay for and navigate to thousands of parking spaces. Alexa Conversations helps make Arrive’s in-car experience more functional and satisfying without changing the existing code.

In 2019, Alexa worked with OpenTable, Uber, and Atom Tickets to get feedback on early product design and work on a concept skill. As part of the Alexa Live virtual developer event, Alexa Conversations preview participants shared anecdotes from their hands-on experience. Today, the iRobot Home skill allows customers to schedule cleaning with their Roomba robot vacuum or Braava jet robot mop, but the rigid dialog requirements offer a limited experience. Managing this open-ended task with Alexa Conversations allows customers to follow any number of dialog paths, make changes without starting over, and speak more naturally. Philosophical Creations founder Steven Arkonovich saw a chance to improve the interactions for his Big Sky skill, giving customers more freedom in how they ask for hyperlocal weather information.


Jeff Judge, CTO at Arrive, said that they are very excited about the potential of Alexa Conversations to improve the skill experience by training dialog models with real user interactions. They envision a future where skill developers can focus on delivering the most meaningful content within their skill, leaving the heavy lifting of input processing to the Alexa Conversations Engine, which is a huge step.

-Shridhar Pathak, Sr Technical Program Manager, Amazon



Kalaam

maven

A language for everyone

When we talk about technology, start-ups, or top developers, we usually find them in metro cities. That’s a no-brainer, because the quality of education students get in metro cities is superior. However, the same cannot be said of talent or willingness to learn: rural and semi-urban towns of India have no shortage of talented students, but those students lack awareness and guidance about career opportunities.

Software drives everything, and in the future its impact will be even more pronounced. In 20-30 years, programming will become as important as being able to read and write. An interested student may lag behind simply because of unawareness and an alien-like feeling towards technology. Kalaam was developed to work on this issue: it provides a native-language coding platform to all such students so that they can break the language barrier and learn to code. The first line of code for Kalaam was written in January 2020. It is an interpreted language written in Javascript, which makes it web-based; this makes it easier to access and puts no restrictions on the devices used to access it. It has also removed the need for any particular architecture to run it, minimizing the steps needed to program. It is dynamically typed, which means it figures out the data types of variables on the fly.

Kalaam is created around the Devanagari script and is driven by regular expressions, which help perform the syntax analysis. The first step in any programming language is the lexer, or tokenizer: it takes a statement and spits out individual tokens. For example, Name = "Swanand" becomes Name, =, Swanand. A parser then operates on this batch of tokens, stored in an array, to build a parse tree. A parse tree is a high-level representation of source code; it stores metadata about each statement written in your code. In an interpreter-based programming language, the parse tree or Abstract Syntax Tree (AST) is executed directly to generate the result. There are different approaches to this depending upon the performance requirements: some languages are converted into bytecode and run on a virtual machine, while others are compiled directly to native machine code. An interpreter is more flexible than a compiler and helps you debug better.

Kalaam.io also comes with a "Learning mode", which helps students understand what happens when a particular line of code is executed; this is made possible by Kalaam’s ExecutionStack. Kalaam, as of now, is used for educational purposes, as it is at an early stage. Later on, it will be given a defined use case, perhaps visual programming, and Natural Language Processing will be added to Kalaam.io to facilitate voice typing. Kalaam will soon be made open-source in the form of an npm package so that developers can start contributing to it. Kalaam.io is live with its full-fledged v1.0.0 version, which supports one of the most widely used languages in India, ‘Hindi’, and a regional language, ‘Marathi’; additionally, the user can code in English as well. Certainly, Kalaam has broken the language barrier in programming for the bright minds of rural India.
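The lexing step described above can be sketched with a single regular expression, much as the article suggests. This is an illustrative toy, not Kalaam's actual lexer; the token class names are invented:

```python
import re

# Each token class is a named regex alternative; the first match wins.
TOKEN_SPEC = [
    ("STRING",     r'"[^"]*"'),
    ("NUMBER",     r"\d+"),
    ("OPERATOR",   r"[=+\-*/()]"),
    ("IDENTIFIER", r'[^\s=+\-*/()"]+'),   # also admits Devanagari identifiers
    ("SKIP",       r"\s+"),               # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(line):
    """Split one source line into (token_class, text) pairs."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(line)
            if m.lastgroup != "SKIP"]
```

Running tokenize('Name = "Swanand"') yields the three tokens from the example above: an identifier, an operator, and a string literal.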

The syntax design of Kalaam is inspired by both Javascript and Python, as these languages have a reputation for being easy to understand.


-Swanand Kadam, Lead Architect of Kalaam



Covid-19

unforeseen

diagnosing the outbreak

CoronaVirus disease 2019, or ‘Covid-19’, declared a pandemic by the World Health Organization (WHO) on 11th March 2020, is a highly contagious disease caused by SARS-CoV-2. As the spread of the virus continues to cause deaths and disruption, ways to eventually eradicate it are being sought with technology as a weapon.

Covid-19 belongs to a large family of viruses called coronaviruses. It mainly affects the upper or lower respiratory tract and can cause pneumonia, kidney failure, and, in severe cases, even death. The pandemic placed a great burden on countries’ capacity for quick and effective laboratory diagnostic testing for the virus. A new optical biosensor effectively eased this job by serving as an alternative test method: it uses Localized Surface Plasmon Resonance technology, with artificially produced DNA receptors complementary to the RNA sequences of the virus grafted onto it to detect the virus. Eventually, the time to find the virus in samples was reduced to thirty minutes by a method designed by POSTECH researchers, whose test kits produce a nucleic-acid binding reaction that shows fluorescence only when Covid-19 RNA is present.


A new portable lab-on-a-chip, developed by U-M scientists, is a faster on-site approach to identifying the Covid-19 virus: with eight microlitres of blood, it can identify Covid-19 antibodies in just fifteen minutes. The outbreak impelled many hospitals to face the challenges associated with it, but even during these tough times, technology has effectively improved healthcare resilience across countries. The HINS-light Environmental Decontamination System serves as a decontamination technology at hospitals; it uses HINS-light in combination with LED technologies to produce a warm white lighting system.

This pandemic is unique because of its scale and speed of spread. Up to 31st January 2021, about one hundred million cases were reported all over the world, and it was necessary to maximize the reach of healthcare amenities to the increasing number of patients. Siilo, a healthcare-messenger virtual care platform, uses video conferencing and digital monitoring to deliver remote services in this pandemic and bridge the gap between patients and medical consultants. It allows physicians to virtually assess patients, securely share notes, scan medical reports or tests, discuss patients’ conditions, and provide proper medication and treatment, ensuring proper care.

In these unprecedented times, China has been seen as one of the countries successfully leveraging technology to combat the pandemic. Utilizing its sophisticated and expansive surveillance network, technology giants Alibaba and Tencent have developed a colour-coded health rating system that tracks millions of people daily, assigning people a green, yellow, or red colour based on their travel and medical histories. China also used ‘BeiDou’, its GNSS constellation, and its Radio Determination Satellite Service to keep track of infected people and deliver medical equipment to remote places in Wuhan using robots deployed by ‘CloudMinds’, such as Cloud Ginger (XR-1) and the Smart Transportation Robot.



Polymerase chain reaction (PCR) is the process by which cDNA molecules, which are copies of the RNA transcripts from each cell, are amplified to obtain enough copies of the DNA for sequencing.

Robots, used to supply food and medication to patients without human contact, are based on BeiDou, and many medical devices were IoT-enabled to make it feasible for robots to use them. ‘Apollo’, Baidu’s autonomous vehicle platform, ‘Neolix’, and ‘Antman’, a drone maker, delivered medical supplies and food at hospitals.

In India, the ‘Aarogya Setu’ app has been launched, which uses the phone’s location data and Bluetooth to assess whether you have been near a person infected by Covid-19 by looking through a database of known cases. The app also provides information, news, and updates about the disease from the health ministry. To direct the response based on emerging expertise and evidence, a technical expert committee comprising public health experts, virology experts, and clinical experts has been formed. Three other committees, for clinical care and management, death audit, and telemedicine, headed by expert intensivists, pulmonologists, and clinicians, were created at the state level. Synergies between the Department of Health and Family Welfare and the Department of Medical Education have helped streamline the rapid response to this pandemic: tertiary healthcare facilities take care of the curative aspects of health, while physicians, nurses, and paramedical staff handle the education and training of health workers.

A few other countries have leveraged technology to combat and control the spread of the virus. In South Korea, mobile-phone and satellite technology is being used to trace potential carriers by collecting data from security-camera footage, facial-recognition tools, bank-card records, and GPS data from vehicles and mobiles, thus providing real-time data. Using mobile technology, Iceland is able to collect patient-reported symptom data, which is then compared with datasets such as clinical and genomic sequencing data to predict the spread of the virus.

In Singapore, a mobile application is used that exchanges short-distance Bluetooth signals whenever two people are near each other, whereas Germany is using a smartwatch application that collects data such as pulse, temperature, and sleep patterns for signs of illness. Hong Kong has developed a wristband, connected through the cloud to a database, which alerts the authorities if someone breaks their quarantine. Researchers have been working around the clock to develop effective vaccines, which people started receiving in the UK in December 2020. Today, different vaccines are available in various countries, each requiring approval for use. They need to pass through three phases of trials to prove that they are safe and effective; the last stage, phase 3, involves tens of thousands of participants, after which the vaccine is rolled out for vaccination. By February 2021, nearly a dozen vaccines were approved in various countries across the world: Comirnaty (BNT162b2) by Pfizer, Moderna's mRNA-1273, AstraZeneca's AZD1222 (also known as Vaxzevria and Covishield), Sputnik V, the Janssen vaccine, CoronaVac, BBIBP-CorV, EpiVacCorona, Convidecia (Ad5-nCoV), and Covaxin (BBV152). Even though we have been devastated by this pandemic, in this era of technology we have successfully deduced numerous containment strategies. As the struggle to find a vaccine is over, people around the world will work together to bring down the numbers. With these technological successes, the world will increasingly rely on digital technology to help build societies resilient to pandemics.
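The Bluetooth proximity exchange described above can be sketched in a few lines: devices log anonymised tokens with signal strength, and a contact is flagged only if close-range contact was sustained. Everything here, the token names, the RSSI cutoff, and the ten-minute rule, is an illustrative assumption, not the logic of any real app:

```python
from datetime import datetime, timedelta

# Hypothetical proximity log: (anonymised token, timestamp, RSSI in dBm).
ENCOUNTERS = [
    ("token_a", datetime(2021, 3, 1, 10, 0), -55),
    ("token_a", datetime(2021, 3, 1, 10, 5), -60),
    ("token_a", datetime(2021, 3, 1, 10, 10), -58),
    ("token_b", datetime(2021, 3, 1, 11, 0), -90),
]

RSSI_CLOSE = -70              # stronger than -70 dBm roughly means "nearby"
MIN_DURATION = timedelta(minutes=10)

def exposure_candidates(encounters, infected_tokens):
    """Flag tokens whose close-range contact with this device
    lasted at least MIN_DURATION."""
    flagged = set()
    for token in infected_tokens:
        # Timestamps of close-range sightings of this token.
        close = sorted(t for tok, t, rssi in encounters
                       if tok == token and rssi > RSSI_CLOSE)
        if close and close[-1] - close[0] >= MIN_DURATION:
            flagged.add(token)
    return flagged

# token_a: 10 minutes of close contact; token_b: signal too weak (too far).
print(exposure_candidates(ENCOUNTERS, {"token_a", "token_b"}))
```

In the real systems, the matching against infected tokens happens against health-authority databases rather than a local list, but the proximity-plus-duration filter is the core idea.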

-The Editorial Board CREDENZ.IN

March 2021

ISSUE 16.1

Pg 20


Achievements

congenial

honouring ideas

Lifesavers

In a world that is struggling to rebuild its healthcare infrastructure, mental health has never been prioritized. Every 40 seconds, one person dies by suicide worldwide, and this issue demands urgent attention.

The event was sponsored by the Digital Medicine Society, Pf(IR)e Lab, Massachusetts eHealth Institute and many more healthcare research labs and companies.

Determined to tackle this trend using technology, a diverse team including Aboli Marathe, a student of Computer Engineering at Pune Institute of Computer Technology, won the Second Prize of $1,000 at MIT Grand Hack 2020 with their proposed technology: Lifesavers.

The team was mentored by professors and entrepreneurs from universities across the world, including Harvard and MIT. Team Lifesavers won the second prize in the Digital Clinical Measures of Activity track with a truly innovative solution. Lifesavers is a passive tracking app that uses existing activity trackers to detect acute behaviour changes before suicide attempts. It analyses changes in sleep time and duration, sleep activity, motor agitation such as walking pace, and social connectedness, and uses machine learning to determine when an individual is at high risk of suicide. The team consisted of physicians, entrepreneurs, researchers, and even high school students from India and the USA, with a common goal of reducing suicide rates globally. After receiving recognition at this event, they plan to continue working on the product and release it in the market soon.
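The idea of detecting acute behaviour changes against a personal baseline can be illustrated with a simple z-score check. The feature names, values, and threshold below are hypothetical; the team's actual model and features are not public:

```python
import statistics

def risk_flags(baseline, today, z_threshold=2.0):
    """Flag features that deviate sharply from the user's own baseline.
    baseline: {feature: list of recent daily values}
    today:    {feature: today's value}"""
    flags = {}
    for feature, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        z = (today[feature] - mean) / stdev   # deviation in standard units
        flags[feature] = abs(z) >= z_threshold
    return flags

# Illustrative week of tracker data for one user.
baseline = {
    "sleep_hours":   [7.5, 7.0, 8.0, 7.2, 7.8, 7.4, 7.6],
    "walking_pace":  [1.4, 1.5, 1.35, 1.45, 1.5, 1.4, 1.42],  # m/s
    "messages_sent": [25, 30, 28, 22, 27, 26, 24],
}
# A day with abrupt sleep loss and social withdrawal.
today = {"sleep_hours": 3.1, "walking_pace": 1.44, "messages_sent": 2}

print(risk_flags(baseline, today))
```

A production system would combine such per-feature signals in a learned model rather than thresholding each one independently, but the baseline-deviation idea is the same.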

The COVID-19 pandemic has shown the world that strong emergency preparedness can only emerge from robust healthcare systems. From 2nd to 4th October 2020, MIT Hacking Medicine organized a global hackathon, MIT Boston Grand Hack 2020, focused on disrupting healthcare and changing these trends worldwide. The tracks of the hackathon included Customized Cancer Care, Digital Clinical Measures of Activity, Future of Aging, and Access to Healthcare during COVID-19.



Unsinkable metal, made using two grooved aluminium disks separated by a small pillar, traps a giant air bubble that keeps it afloat. The array can potentially sustain damage and still float.

CoVidSpy

The team of Tanmay Jain and Krisha Bhambani, third-year Bachelor of Engineering students, was announced the runner-up in the global category amongst more than 200 participating teams, winning a prize of $10,000 for their solution CoVidSpy at the international event Better Health Hackathon: #CodeForCOVID19.

Once a person enters the organization, he or she is under surveillance. The system evaluates each person's position relative to others and determines whether they are maintaining the recommended social distance. The zoning module offers flexibility by allowing any number of zones to be configured, in any shape or orientation. If a social distancing violation is detected in a zone, that zone's alert lights are triggered as a visual indication to alert people. These zone-wise violations are relayed to the authorities with a timestamp via an admin web page. Logs of violations, the total number of people in an area, and similar data are stored in a database and are accessible through the admin web page, which also provides analytical tools such as graphs for zone-wise and time-wise recording of violations.
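The core of the distance-and-zone logic can be sketched as a pairwise distance check over detected person positions, mapped to zones. The positions, zone layout, and 2-metre threshold below are illustrative assumptions; CoVidSpy's internals are not public:

```python
import math

MIN_DISTANCE_M = 2.0  # recommended social distance (illustrative)

def violations(people, zones):
    """people: {id: (x, y)} ground-plane positions in metres
       zones:  {name: (x_min, y_min, x_max, y_max)} rectangular zones
       Returns the set of zones containing a violating pair,
       i.e. the zones whose alert lights should be triggered."""
    ids = sorted(people)
    alerts = set()
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            (xa, ya), (xb, yb) = people[a], people[b]
            if math.hypot(xa - xb, ya - yb) < MIN_DISTANCE_M:
                # Attribute the violation to the zone containing person a.
                for name, (x0, y0, x1, y1) in zones.items():
                    if x0 <= xa <= x1 and y0 <= ya <= y1:
                        alerts.add(name)
    return alerts

people = {"p1": (1.0, 1.0), "p2": (1.5, 1.2), "p3": (8.0, 8.0)}
zones = {"entrance": (0, 0, 5, 5), "food_court": (5, 5, 10, 10)}
print(violations(people, zones))
```

In the real system the (x, y) positions would come from the object detector's bounding boxes projected onto the ground plane, and each alert would also be logged with a timestamp to the database.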

The Grand Finale was organized by HCL Technologies in collaboration with Microsoft. The judging and advisory panel for the hackathon included academicians and influencers from institutions such as Johns Hopkins, Cambridge University, the Tuck School of Business, and International SOS, and subject matter experts and thought leaders from Merck & Co., Johnson & Johnson, Novartis, Blue Cross Blue Shield North Carolina, and other industry leaders from life sciences and healthcare, joined by technology disruptors from HCL and Microsoft. Tanmay and Krisha were two of the youngest finalists. CoVidSpy is an AI-enabled, computer vision based pandemic management system: a comprehensive system to monitor people who face a higher risk of infection, and to alert others about it at a particular time in an area. First, CoVidSpy determines whether a person has a fever using an infrared camera, then uses an object detection algorithm to identify faces with and without masks, as well as whole people. These modules govern a person's entry into the organization.


This feature would be useful in, say, a mall where a particular store sees crowding at a particular point in time; the authorities can limit the number of people allowed in that area. People walking around, even with the intention of maintaining proper social distance, sometimes forget or are unaware of a violation. Hence, a visual indication using a lamp is provided so that people can take measures to distance themselves from others. CoVidSpy is a fully compiled frontend and backend system that holds immense potential for usage in all corners of pandemic-affected countries. Since the FPS for video processing is high and the accuracy reaches up to 95%, there is immense scope for scalability. Hence, this solution can be used both in large establishments with multiple cameras and high computational power, and in smaller organisations with few resources.

-The Editorial Board



Electrochemical Eye

featured

beyond bionic eyes

Innovation is moving at a scarily fast pace. Every day, we encounter new concepts and technologies designed to slingshot us into a sci-fi future. When done right, innovation can help people in many ways. The invention of the artificial 3D retina is one such concept.

Scientists have spent decades trying to replicate the structure and clarity of a biological eye, but vision provided through existing prosthetic eyes, largely in the form of spectacles attached to external cables, produces poor resolution with 2D flat image sensors. The breakthrough came from the fact that perovskite, a light-sensitive and conductive material, can be drawn into extremely thin nanowires, a few thousandths of a millimetre in length, which mimic the eye's thin photoreceptor cells. The spherical shape allows light to pass through and hit the curved lens. As Hongrui Jiang notes, the image formed behind a curved lens is itself curved, and a flat sensor cannot focus it sharply.

Scientists at the Hong Kong University of Science and Technology (HKUST) have recently developed the world's first spherical artificial eye with a 3D retina, claiming that it has more advanced capabilities than human eyes. It is a ground-breaking achievement in the field of medical technology. The new eye, known as the Electrochemical Eye (EC-Eye), closely resembles the structure of the natural eye and, with further upgrades, could outperform it in terms of sharper vision and the detection of infrared radiation in darkness.

To solve this problem, the scientists at HKUST deformed a soft aluminium foil into a hemispherical shape. An electrochemical process turned it into an insulator, aluminium oxide, and left the material studded with nanoscale pores. The result was a curved hemispherical structure with enough holes to grow the perovskite nanowires in. The density of these nanowires is much higher than the density of the photoreceptors in human eyes.

Prof. Fan, along with his team, spent nine years completing the study on the electrochemical eye. He says that in terms of image resolution, angle of view, or user-friendliness, current bionic eyes are still no match for their natural human counterpart. He believes newer technology to address these problems is urgently needed, which motivated him to start this unconventional project.

The new 3D retina consists of an array of nanowire light sensors, which imitate the photoreceptors of human retinas. During the experiment, the team at HKUST, led by Dr Fan Zhiyong and Dr Gu Leili from the Department of Electronic and Computer Engineering, connected the nanowire light sensors to a bundle of liquid-metal wires serving as nerves behind the human-made hemispherical retina. The setup successfully channelled the light signals to a computer screen, which showed what the nanowire array could "see". The nanowires were chosen because they have a higher density than photoreceptors in the human retina, which means the artificial retina can receive many more signals and attain higher image quality than the human retina.



PAR1 (protease-activated receptor 1) has been identified as a molecular switch which, when switched off, boosts myelin regeneration. This plays a crucial role in speeding neurological healing in disorders such as multiple sclerosis.

Moreover, experimenting with other materials to boost the sensors' spectral range and sensitivity could give the artificial eye features such as night vision. Every retina has a blind spot, caused by the fact that the bundle of nerves must connect somewhere on the retina to carry information to the brain; that connection point has no space for photoreceptor cells. Its effects can easily be seen if you look up at the stars at night: find a very dim star and try to look at it directly, and it becomes hard to see, but it is easier to see if you look slightly to the side of it.

Each photosensor in the artificial retina can serve as a nanoscale solar cell. With further modifications, the EC-Eye could become a self-powered image sensor, removing the need for an external power source or circuitry; when used as an ocular prosthesis, it would be much more user-friendly than current technology. The stability, performance, and biocompatibility of the device will be improved further in the future. The team's goal is to connect these nanowire light sensors directly to the nerves of visually impaired patients. Beyond prosthetics, the device could be used in electronic devices such as cameras, and provide vision to humanoid robots so that they can interact more naturally. In the coming years, this technology could become very practical, as it shows promising initial results. The team spent a total of nine years turning this science fiction concept into reality. This new technology opens an extraordinary era that can transform the lives of patients with visual impairment. There is still much work to be done, but the concept works, and that is very exciting!

On the contrary, the EC-Eye does not have a blind spot: the light sensors scattered across the entire human-made retina can each feed signals through their own liquid-metal wire at the back, so they do not have to route through a single spot. The working principle of the artificial eye involves an electrochemical process adapted from a type of solar cell; the device rests on an aluminium and tungsten membrane and is shaped like a half-sphere.


-Bhumika Patidar Pune Institute of Computer Technology



The Pandemic

panegyric

estimating epidemiological parameters

In December 2019, the world witnessed a novel coronavirus outbreak: infectious pneumonia caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which causes respiratory and intestinal complications in the human body. A great deal of research is ongoing into the epidemiological parameters that can stop or lessen the spread of COVID-19. These parameters determine how severely an illness spreads and help to categorize an outbreak as a pandemic, an endemic, or an epidemic.

Big data and data mining techniques have always played an important role in determining these parameters. Various mathematical and machine learning techniques are used to analyze them. COVID-19 data accessed from the WHO and national sources is used to build decision-making schemes and predict the outbreak's influence. This data is used to deduce various statistical, analytical, mathematical, and medical parameters. Some of the significant parameters are the daily death count, incubation period, environmental parameters, transmission rate, mobility, number of carriers, report time, etc. The reproductive number (R) is the average number of people infected by one diseased individual. If this parameter is greater than one, the number of infected people can grow exponentially; if it is less than one, the outbreak abates. But it was observed that this parameter alone fails to predict the spread: although often treated as a fixed property of the pathogen, it also depends on how people come into contact with one another and can differ drastically across regions.


For this reason, the reproductive number R is distinguished into two forms: the basic reproductive number R0, used to predict the initial spread of the infection, and the effective reproductive number Re, which captures transmission once public health measures are initiated. The goal of those measures is to keep Re below one: even a slight increase in Re can lead to an uncontrollable situation that healthcare systems find difficult to handle. Values of Re are deduced by simulation from known cases, deaths, and hospitalizations, which is why it is context-dependent and dynamic. The final reproductive number R depends on the model used, its underlying assumptions, the quality of the data it is built with, and the context. It depends on the transmission rate of the pathogen and the contact rate, factors which vary with the properties of the pathogen and the place and time of the outbreak. The value of R can be calculated using the SIR model, from the probability of infection, the period over which a person is infectious, and the contact rate. It shapes the speed of disease spread and determines how quickly susceptible people become infected. The fundamental SIR (Susceptible Infected Recovered) model, introduced by Kermack and McKendrick, was modified into the SIRD (Susceptible Infected Recovered Dead) model and the SEIR (Susceptible Exposed Infected Recovered) model for early plague predictions. The SEIR dynamic model uses the Particle Swarm Optimization (PSO) method to estimate parameters. It is a compartmental model depicting the stages of an infection. To apply the SEIR model, three parameters are taken into consideration: the product of the number of people exposed each day by infected people and the probability of transmission (beta), the incubation rate (the rate at which latent individuals become symptomatic), and the average rate of recovery or death in the infected population.



The Artificial Leaf

uses carbon capture to produce methanol as its end product. In a process similar to photosynthesis, it takes carbon dioxide from the air and, catalysed by cuprous oxide, converts it into this fuel.

The SEIR model is a system depicting how the proportion of people in each group can change over time. It compartmentalizes the population into four possible states: Susceptible [S], Exposed or latent [E], Infectious [I], and Removed [R]. To apply the model, the rates of change between each pair of states ([S] to [E], [E] to [I], and [I] to [R]) are calculated. Along with this, a Long Short-Term Memory (LSTM) model, a type of recurrent neural network (RNN), is used to process and predict the number of new infections over time. An important task before checking the effectiveness of the model was basic training on a dataset. For that, statistics from the 2003 SARS epidemic were fed to the model, incorporating COVID-19 epidemiological parameters such as the probability of transmission, the incubation period, and the probability of recovery or death. The model was then optimized with the Adam optimizer and run for 500 iterations. Because the datasets were insufficient, a simpler network structure was used to prevent overfitting.
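The S → E → I → R dynamics above can be sketched with a simple forward-Euler integration. The parameter values below are illustrative (R0 = beta/gamma = 2.5, a roughly 5-day incubation and 10-day infectious period), not fitted to real data:

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the SEIR compartments (as proportions).
    beta:  transmission rate (S -> E)
    sigma: incubation rate   (E -> I)
    gamma: removal rate      (I -> R)"""
    s, e, i, r = s0, e0, i0, r0
    for _ in range(int(days / dt)):
        new_exposed    = beta * s * i    # S -> E flow
        new_infectious = sigma * e       # E -> I flow
        new_removed    = gamma * i       # I -> R flow
        s -= dt * new_exposed
        e += dt * (new_exposed - new_infectious)
        i += dt * (new_infectious - new_removed)
        r += dt * new_removed
    return s, e, i, r

# Start with 0.1% of the population exposed.
s, e, i, r = seir(beta=0.25, sigma=0.2, gamma=0.1,
                  s0=0.999, e0=0.001, i0=0.0, r0=0.0, days=200)
print(f"final susceptible share: {s:.3f}, removed share: {r:.3f}")
```

Because the four flows cancel in pairs, the proportions always sum to one, which is a useful sanity check on any compartmental implementation.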

The model compares simulation results with real data and with possible scenarios of countermeasure implementation. It then differentiates between detected and undetected cases of infection, and between different severities of illness, including asymptomatic and pauci-symptomatic cases. The SIDARTHE model is a mean-field model, in which the average effect of phenomena is taken into consideration for a whole population. Its dynamic system consists of eight differential equations, which describe the evolution of the population in each stage over a specific period.

Drawing inferences from the contagion duration helps to identify the transmission mechanisms and can also determine the source of infection. Bayesian parameter inference using Markov Chain Monte Carlo (MCMC) methods on the Susceptible-Infected-Recovered (SIR) and Susceptible-Exposed-Infected-Recovered (SEIR) epidemiological models is used for this purpose. The model allows us to draw inferences by incorporating the unobserved infection times and latent periods.
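A minimal version of such Bayesian inference is a random-walk Metropolis sampler for the transmission rate beta of a deterministic SIR model, with a Poisson likelihood on daily case counts. The population size, rates, step size, and synthetic data below are all illustrative assumptions:

```python
import math, random

random.seed(0)

def sir_incidence(beta, gamma=0.1, s0=0.99, i0=0.01, days=30, N=1000):
    """Deterministic SIR; returns expected daily new infections (scaled by N)."""
    s, i = s0, i0
    out = []
    for _ in range(days):
        new = beta * s * i
        s -= new
        i += new - gamma * i
        out.append(new * N)
    return out

def log_lik(beta, data):
    """Poisson log-likelihood (up to a constant) of observed daily counts."""
    if beta <= 0:
        return -math.inf
    return sum(-lam + k * math.log(lam)
               for lam, k in zip(sir_incidence(beta), data))

# Synthetic "observed" counts generated with a true beta of 0.3.
observed = [round(x) for x in sir_incidence(0.3)]

# Random-walk Metropolis on beta (flat prior).
beta, samples = 0.2, []
for step in range(5000):
    prop = beta + random.gauss(0, 0.01)
    if math.log(random.random()) < log_lik(prop, observed) - log_lik(beta, observed):
        beta = prop
    if step >= 1000:                 # discard burn-in
        samples.append(beta)

print(f"posterior mean beta ~ {sum(samples)/len(samples):.3f}")
```

A real analysis would also sample gamma and the unobserved infection times (as the article notes), use a stochastic rather than deterministic epidemic model, and run multiple chains with convergence diagnostics.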

Epidemiological assessments are beneficial, as they incorporate filtering information by time, place, and person. For this novel strain of coronavirus, epidemiological parameters and models have played a vital role in studying and researching the outbreak. Thus, early estimation and research can diminish the spread of similar outbreaks in the future.

Agent-based models are used to analyze the movement of individuals. Once incorporated into a model, this can also predict the spread of the disease. The analysis is based on census data, activity surveys, de-identified cell phone location data, and information from transportation data. Agent-based models calculate the reproductive number (R) per agent, unlike the SIR model, which calculates it over the entire population. Statistical techniques can then predict an outbreak trajectory based on observed data.

SIDARTHE is a new model obtained by transforming the SEIR model and the Dynamic Suspected Exposed Infective Quarantine (D-SEIQ) model with machine-learning-based parameter optimization; it predicts the course of the pandemic and helps in building effective control strategies. The model consists of eight stages: susceptible (S), infected (I), diagnosed (D), ailing (A), recognized (R), threatened (T), healed (H), and extinct (E), collectively termed SIDARTHE.


- The Editorial Board



Superconductivity

pansophy

revolutionising energy

Superconductivity was first observed in mercury by the scientist Heike Kamerlingh Onnes in 1911. In the superconducting state, the electrons do not suffer any friction. The first frictionless movement of electrons was recorded in mercury at a critical temperature of 4.2 K, because the resistivity of the metal drops to zero at the critical temperature (Tc). The superconductivity phenomenon is also confirmed by its diamagnetic nature, known as the Meissner effect.

At room temperature, electrons generally suffer collisions with each other as well as with the lattice sites; therefore, all materials offer resistance depending on their internal properties and the temperature. Despite having positively charged protons, the nucleus is stable because of short-range attractive nuclear forces arising from the exchange of meson particles between protons and neutrons; this exchange force overcomes the repulsion between protons. Nowadays, scientists are researching high-temperature superconductors. They aim to find a material that shows its superconducting property close to room temperature, and to understand how electron pairing takes place to produce super-efficient electrons in high-temperature superconductors. Some scientists have proposed the possibility of high-temperature superconductivity in an organic polymer sample, based on exciton-mediated electron pairing, as opposed to the conventional phonon-mediated electron pairing of BCS theory.


According to the Hubbard model, in some superconducting materials electrons hop from one lattice site to another instead of pairing up. A surprising discovery involved layering the two-dimensional material molybdenum sulfide with another material, molybdenum carbide, a known superconductor. Another group of researchers published results on high-temperature superconductivity in palladium hydride (PdHx: x > 1), suggesting a superconducting transition temperature of 250 K; the critical temperature increases as the density of hydrogen inside the palladium lattice increases. In 2018, researchers noted a possible superconducting phase at 260 K (−13 °C) in lanthanum decahydride at elevated (200 GPa) pressure. Recently, scientists have developed high-temperature superconducting magnets, namely BSCCO and REBCO, which can provide higher magnetic fields at higher operating temperatures. One of the advanced theories of high-temperature superconductivity, used to explain some superconducting materials, is exciton-mediated electron pairing, which contradicts the conventional theory. At present, scientists are trying to understand the critical theory of high-temperature superconductors and to find a material that shows superconductivity close to room temperature at minimum expense. High-temperature superconductors would make high-voltage transmission across continents feasible, helping to smooth out the intermittency of renewable sources. A superconducting wire carrying a current that cannot diminish would act as a perfect store of energy: energy could be captured and stored indefinitely.

-Dr K. C. Nandi Pune Institute of Computer Technology



Alumnus of the Year

novellus

Mrs Sujata Kosalge

A venture of a thousand miles begins with a step, but it is all those little steps that make the journey complete. This section is dedicated to acknowledging the illustrious achievements of professionals who started their journey at PICT and scaled new heights of success.

A distinguished Vice President of Engineering on the advertiser platform team at Google, Mrs Sujata Kosalge graduated from Pune Institute of Computer Technology in 1997 with a Bachelor's degree in Computer Engineering. After graduating from PICT Pune, she joined Persistent Systems, and went on to complete her Master's in Computer Science at Stanford University. She then worked at the EdTech startup Vitalect, and thereafter at eBay as a staff software engineer in their infrastructure group, where she handled several high-performance, revenue-generating backend systems. Her Google journey started in 2007, when she joined Orkut as a senior software engineer. After a year, she moved to the Google Advertising API team, where she grew from senior software engineer to distinguished engineer. There she helped lead the team through multiple re-implementations and re-architectings of the system advertisers use, which drives advertising revenue at Google.


Currently, she is a VP of Engineering in the core organization, the team primarily responsible for the technical foundation behind Google's flagship products. She leads the Data Infrastructure & Analysis (DIA) group, which is responsible for developing an end-to-end data management platform for applications operating at petascale. This platform offers a wide range of integrated products for multiple uses inside Google, such as data ingestion, processing, storage, and analysis for its key products, including Google Ads, Google Marketing Platform, Payments, etc. The platform is used for real-time, interactive, external-user-facing applications, as well as supporting internal analysts' needs. It comes with many challenges, both technical and organizational: on the technical front, the infrastructure and solutions need to support the diverse requirements of the various products, such as high queries per second (QPS), high data ingestion, consistency, resource efficiency, privacy, and data trust. Mrs Sujata Kosalge is also a passionate advocate of Google's diversity efforts and of women in engineering. She is co-chair of Women@North America at Google, an organization committed to empowering women throughout the company. The program focuses on identifying and providing opportunities for students from less privileged backgrounds. She takes pride in her work at Women Techmakers Engineering Fellows, where she works closely on an outreach program in India to improve the representation of women in technology.

- The Editorial Board



AI 2020

philomath

the ethical conundrum

Our world has seen a surge in computational abilities in the past decade, with high-performance computing devices bringing unforeseen speeds to the homes of developers. More computation means faster math, which in turn supports enhanced machine learning development environments. But with great power comes great responsibility, a message that we need to remember now more than ever.

In the wake of the COVID-19 pandemic, computer vision suddenly shifted its focus to public surveillance, crowd monitoring, and object detection. Deployed in large-scale applications, these systems continuously collect data about citizens from CCTV feeds on roads, in markets, and at tourist hotspots. We now have advanced systems that can monitor thousands of people at the same time. Face detection can capture every single face in public, store it in open databases, and cross-reference it with online data banks, social media feeds, government records, and criminal databases. Gait recognition software can infer how you feel. Hidden cameras can constantly map your entire lifestyle. The most invasive devices are our smartphones, which continuously log our locations, images, payments, contacts, and personal messages. The accountability of the organizations involved remains a major concern: with frequent data leaks, malicious hackers, and data scams, it has become increasingly difficult to trust these surveillance systems.


Although these measures were introduced out of necessity, the implications of such wide-scale surveillance techniques need to be studied in detail. If we continue to ignore the consequences rather than understand the underlying problem, AI may soon become the monster we fear it to be. The first step of analysis is exploring how this data is being captured. When dealing with masses of data and no centralized architecture for surveillance, the security of the system can easily be compromised, threatening to expose millions of citizens. Furthermore, if private organizations are employed to carry out this monitoring, their usage of the data must be checked by the authorities. The simple truth is that, left unchecked, COVID monitoring could transform into criminal monitoring, and from there warp into a dangerous spying operation. The array of deployed applications has already started collecting data about citizens without any formal notice. But the biggest concern is the public's lack of awareness of these devices. The introduction of strict data laws and rigorous checking of companies by a government-authorized approval board is the need of the hour. The individual CCTVs employed by banks, shopping malls, and private establishments must be made to declare their intentions and publicly display that they are monitoring the public's actions. Wherever surveillance is carried out for the protection of citizens, notices must be displayed in all languages, for better awareness and increased trust. AI is not to be feared, and it does not aim to take over the world. But by letting it infiltrate our public data, manipulated by greed, power, and money, we are allowing it to take a dangerous course that cannot be predicted or reversed until it is too late.

-Aboli Marathe Pune Institute of Computer Technology



AlphaFold

philomath

AI for medicine

A 50-year-old grand challenge of biology has been solved. A team of scientists, engineers, and machine learning experts, along with the organizers of the long-running Critical Assessment of protein Structure Prediction (CASP) competition, announced an AI that will have a huge impact on the study of medicine. The latest version of DeepMind's AlphaFold is a deep-learning system that can predict the structure of proteins to within the width of an atom.

Earlier this year, they predicted several protein structures of the SARS-CoV-2 virus, including ORF3a, which were previously unknown. At CASP14, the structure of another coronavirus protein, ORF8, was predicted. Every two years, the organizers of CASP release about 100 amino acid sequences for proteins whose shapes have been identified in the lab but not yet made public. Participants predict the protein structures blindly, and these predictions are subsequently compared to the ground-truth experimental data when it becomes available. A folded protein can be treated as a spatial graph, where residues are the nodes and edges connect residues in close proximity. An attention-based neural network, created and trained end-to-end, learns the structure of this graph while reasoning over the implicit graph it is building. It uses evolutionarily related sequences, multiple sequence alignment (MSA), and a representation of amino acid residue pairs to refine this graph.

The sequence of amino acids determines the unique 3-dimensional structure and the specific function of each protein. Efforts to develop a vaccine depend on the virus's spike protein, since the shape of that protein determines how the virus attacks human cells. The spike is just one protein among billions across all living things; there are lakhs of different types of proteins inside the human body alone.

CASP got a push when DeepMind entered the competition in 2018 with the first version of AlphaFold. It left other computational techniques in the dust, although it still could not match lab accuracy. Many researchers have since adopted approaches similar to AlphaFold's; now, more than half of the entries use some form of deep learning in their methods.

The metric used by CASP to measure the accuracy of predictions is the Global Distance Test (GDT): the percentage of amino-acid residues within a threshold distance of the correct position. AlphaFold achieved a median score of 92.4 GDT. With an average error of about 1.6 angstroms, it can find a protein's shape and structure in a few days, helping researchers design new drugs and understand diseases. It could help solve many other problems, such as increasing crop yields, making plants more nutritious, and designing enzymes that digest waste.
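As a rough illustration, GDT-style scoring can be sketched in a few lines of Python. The `gdt_ts` function and coordinate lists below are hypothetical; real CASP scoring also performs careful structure superposition before measuring distances.

```python
import math

def gdt_ts(pred, true, thresholds=(1.0, 2.0, 4.0, 8.0)):
    """Global Distance Test (Total Score), in percent.

    pred, true: lists of (x, y, z) residue coordinates, assumed to be
    already superimposed. For each threshold (in angstroms), take the
    fraction of residues within that distance of the correct position,
    then average the four fractions.
    """
    dists = [math.dist(p, t) for p, t in zip(pred, true)]
    fractions = [sum(d <= t for d in dists) / len(dists) for t in thresholds]
    return 100.0 * sum(fractions) / len(fractions)

# A prediction matching the true coordinates exactly scores 100;
# one residue far off in a two-residue toy protein scores 50.
true = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
pred = [(0.0, 0.0, 0.0), (13.0, 0.0, 0.0)]
```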

Pg 30

March 2021

There is still much to learn, including how multiple proteins form complexes, how they interact with DNA, RNA, or small molecules, and how we can determine the precise location of all amino acid side chains. The progress gives confidence that AI will become one of humanity's most useful tools in expanding the bounds of scientific knowledge.

- Tanvi Mane Pune Institute of Computer Technology



Edge Computing

philomath

the descent of cloud?

We live in an era where data is the new oil of the digital economy, reflected in the digital footprint we leave behind. The plethora of data generated creates the need to store and process it, which gave rise to cloud computing. However, the exponential increase in data generation has led to high-latency issues and a bandwidth deficit when it comes to processing all that data. Thus, a more versatile computing architecture is the need of the present, one that can cater to all our current processing needs while proving not only faster but also more cost-effective. Edge computing seems to do justice in all these aspects.

Edge computing is a distributed paradigm that brings data storage and processing closer to the end-user, thus having an edge. It is an ecosystem in which real-time data is processed along the communication path via a decentralized processing infrastructure, i.e. at the edge of the network, hence the name. In contrast, cloud computing processes data in proprietary data centers, which is inefficient for real-time data and leads to higher latency, as processing takes place at a fixed location only. Edge-computing hardware and services help resolve this issue by serving as a local source of processing, for instance through an edge gateway, which can process data from an edge device and then relay only the relevant data back through the cloud.
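The filtering role of an edge gateway can be sketched as follows. `EdgeGateway`, its thresholds, and the `uplink` list are illustrative stand-ins invented for this example, not a real IoT API.

```python
# Hypothetical sketch of an edge gateway: process raw sensor readings
# locally and relay only the relevant (out-of-range) ones to the cloud.

class EdgeGateway:
    def __init__(self, low, high):
        self.low, self.high = low, high   # normal operating range
        self.uplink = []                  # stands in for the cloud connection

    def ingest(self, reading):
        """Handle one sensor reading at the edge."""
        if self.low <= reading <= self.high:
            return False                  # normal: handled locally, not sent
        self.uplink.append(reading)       # anomalous: relay to the cloud
        return True

gateway = EdgeGateway(low=18.0, high=27.0)
readings = [21.5, 22.0, 35.2, 23.1, 9.8]
sent = [r for r in readings if gateway.ingest(r)]
# Only the two out-of-range readings consume uplink bandwidth.
```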


Thus, it sends data back to the edge device in real-time applications and reduces bandwidth needs. An edge device may be any of a wide variety of devices: an IoT sensor, a notebook computer, the latest smartphone, a security camera, or even an internet-connected microwave oven in the office break room. Edge gateways are themselves considered edge devices within an edge-computing infrastructure.

This computing paradigm was developed specifically to tackle the network traffic caused by the exponential growth of IoT devices in our daily lives. The term itself was coined in the 1990s, when the company Akamai launched its Content Delivery Network (CDN). The concept the CDN employed was to introduce nodes at locations geographically closer to the end-user for content delivery, taking advantage of node caching for optimal data retrieval. Since then, edge computing has been ever-evolving, and with recent developments in the field, it is poised to replace cloud computing in various applications soon.

The reasons for this notion are numerous. But why would one favor the edge architecture over the current IoT ecosystem? Aren't we doing just fine with pure cloud? The biggest motivation is its ability to process and store data faster, enabling more efficient real-time applications that may be critical in many tasks. Another aspect is data security: a single security breach will not compromise a large amount of a company's data, owing to the dilution and decentralization of data storage. The deployment of IoT devices in remote areas also becomes possible, since edge computing reduces the dependence on internet connectivity, a factor that would hugely impact the world by making it more connected. Furthermore, the cost of a cloud-based ecosystem can be reduced by introducing edge computing as the first instance of processing into the existing ecosystem, thereby reducing the cost of operating centralized data centers.

Jet Propulsion: a prototype device that uses microwave air plasmas for propulsion, compressing air to high pressures and using microwaves to ionize the pressurized air stream. It is a potentially viable alternative to conventional fossil-based jet fuel.


Companies that have embraced the cloud for many of their applications have now realized that bandwidth costs can be reduced to a great extent, to the company's gain. In the coming age of the smart-device ecosystem, a variety of use cases can be identified for edge hosting, the majority being time-critical. The two largest fields that edge-based systems can revolutionize are autonomous vehicles and health analytics.

According to Brian Krzanich, CEO of Intel, an autonomous car generates roughly 4,000 GB of data per day on average, which needs processing in real time. Not doing this efficiently may have disastrous consequences; hence, an edge-computing infrastructure is required to further advance self-driving capabilities. Doing all the processing onboard isn't ideal either: it requires a lot of computing power, which a standalone system cannot provide. With more vehicles becoming autonomous by the day, the need for efficient and fast processing systems is ever-growing. Thus, we can safely assume that this is where the future of autopilot systems, if not all vehicles, is headed.

Another large domain where edge computing can be deployed is the healthcare sector. Quick diagnostics and monitoring of health data are needed to reduce the workload on the existing healthcare infrastructure. The world has seen cloud-based IoT devices in this sector in the past; however, none poses an effective solution in the current scenario. Edge-based health-monitoring devices would not only provide real-time diagnosis efficiently but also improve responses to health emergencies in the future via data analytics. Especially for monitoring a person's current condition, this seems to be the best solution, further supporting social distancing as an additional measure.

To answer the burning question at hand: will Edge replace the Cloud paradigm in time? While some might speculate that edge computing will completely replace cloud computing in the future, that definitely won't be the scenario, as the two have different strengths. Edge computing primarily focuses on processing data quickly, while cloud computing focuses on data storage and easy scalability, owing to its centralized processing infrastructure. Thus, each has its unique applications. Also, the cost of migrating to a new technology is quite substantial, and doing so renders the current infrastructure a waste of resources. Edge will complement cloud computing, if not replace it, thus improving the IoT landscape in the coming future.


-Anupam Patil Pune Institute of Computer Technology



Hi-tech Triplet

philomath

the nexus of new technology

On the verge of the next Industrial Revolution, the combination of Blockchain, IoT devices, and AI forms a robust trio that will soon change some perspectives. IoT fetches the data from the source by acting as an independent authority, Blockchain provides the infrastructure for the data to interact with different systems at different steps, and AI optimizes the complete journey using pattern-detection algorithms and data analysis. This disruptive combination has the potential to massively automate processes.

Blockchain increases transparency, trust, and privacy in terms of asset storage. IoT drives the automation industry through autonomous agents, which act as independent authorities, and it also leverages AI and data analysis. Data is collected from IoT devices in the form of audio, video, or images. This data can be processed by smart IoT devices, which have processors of their own. The partially processed data follows the consensus protocols of its Blockchain ecosystem and the path directed by it. These Blockchain protocols, embodied in a smart contract, ensure that data is represented in the decentralized ledger, which can only be accessed by the eligible authorities. Checker functions and machine-learning algorithms are generally used for detecting patterns and optimizing outcomes. This data can be used by other smart devices, which interact with each other based on the smart-contract protocols. Data is stored with a ledger technology where the ledger is distributed between different agents.


In a traditional distributed system, the ledger is distributed between people, but we can leverage AI and smart devices to replace human interference completely. This complete ecosystem can run smoothly, provided that the agents act according to the consensus and the underlying protocols. In logistics, a company can use this trio to reap the most benefit: the complete journey of a single product, right from the retailer to the customer, can be tracked and monitored by these smart agents, which use smart contracts in a completely secured way. A supply-chain network involves many stakeholders, such as brokers and raw-material providers, complicating end-to-end visibility. Due to the involvement of multiple stakeholders, delivery delays have become the biggest challenge. Companies are therefore working on making vehicles IoT-enabled in order to track movement throughout the shipment process. Given the lack of transparency and the complications in the current supply chain and logistics, the combination of Blockchain and IoT can help enhance the reliability and traceability of the network. IoT sensors like motion sensors, GPS, temperature sensors, vehicle information, or connected devices provide crisp details about the status of shipments. Blockchain stores the sensor information, and once the data is saved, stakeholders listed in the smart contracts get access to the information in real time. All these technologies are already present and researched; only large-scale application is to follow. The crypto industry has already shown what Blockchain brings to the table, and IoT and AI are transforming the manufacturing industry through analytics and predictions. All of this indicates that the combination of these technologies might not be too far in the future.
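A toy sketch of the hash-chained ledger idea underlying such a system follows. This is not a real blockchain (no consensus, no peers); the block layout and the shipment fields are invented for illustration, but it shows why tampering with any past IoT record breaks the chain.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Append-only record: the hash covers the data and the previous hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                             sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Two illustrative shipment readings recorded by IoT sensors:
ledger = [make_block({"sensor": "gps", "pos": "19.0N,73.1E"}, prev_hash="0")]
ledger.append(make_block({"sensor": "temp", "celsius": 4.2}, ledger[-1]["hash"]))
```

Altering any stored reading invalidates its hash, so every later block (and every stakeholder reading the ledger) can detect the tampering.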

-Hrishikesh Ambekar Pune Institute of Computer Technology



Neuralink

philomath

extending human capabilities

Neuralink is a project undertaken by Elon Musk that focuses on research on the brain and neurons. Neuralink has promised to develop products that can help humans communicate using telepathy and eradicate conditions such as blindness, paralysis, deafness, and mental illness. This article presents updates on Neuralink's progress, discussed at the Neuralink conference on August 30, 2020.

Many people have been diagnosed with neurological problems, and a generalized device that is affordable and reliable is needed to tackle them. Elon emphasizes both significant and minor brain problems, such as strokes, memory loss, brain damage, and insomnia. He believes that if we can locate and correct the electric signals generated by neurons, we can heal many brain-related problems. The existing Utah Array contains a bed of rigid spikes, 100 channels per array, and is inserted with an air hammer. Once inserted, big wires and boxes come out of the head, which carries a risk of infection. Its most significant disadvantage is that the procedure is expensive.

Last year, they managed to attach the device to the skull, but it had long and complicated wires, which were hard to maintain. It also proved to be an expensive process, just like the conventional method. Hence, Elon came up with a model similar to a Fitbit: the wireless V0.9, which now sits flush in the skull and is invisible.

It also gives the user critical information about the body, like temperature, detection of strokes, and warnings of a heart attack occurring in the body. It has all-day battery life, and the best feature of V0.9 is that its threads can be as thin as a human hair. One of the essential steps of getting V0.9 implanted in the brain is attaching the link. Elon proposed to get this done with a device that has the electrode attached to it; an advanced robot can achieve the required level of precision. The robot takes only an hour for the surgery and does not require anesthetics. Hence, to build these robots on a large scale, Elon proposed hiring various talents in robotics and surgery. The procedure starts with selecting a location to implant the link. The robot then attaches the link and implants V0.9 in the selected portion of the brain. Once implanted, V0.9 can detect heart rate and blood pressure, and can also stream music. Dr. John Krakauer, a neuroscientist at the Johns Hopkins School of Medicine, claimed that we are at an immature phase in fulfilling Elon Musk's ambitions to stream music into the human brain, among the other functions he proposed. For Krakauer and many others, non-invasive treatments such as animation-based immersive behavioral experiences for neural restoration and neurorehabilitation are important. Adam Marblestone, a neuroscience theorist at Google's DeepMind, summed things up by comparing Neuralink to a well-equipped mountaineering squad that still has to climb the mountain.


-Ananyay Ankit Manipal Institute of Technology



Self-Driving Cars

philomath

redefining vehicles

Self-driving vehicles represent a fast-paced field of modern technology, extending transportation capacities. Autonomous vehicles have the potential to change the transportation industry to a huge extent. Such a vehicle is also known as a driverless car, a self-driving car, an unmanned vehicle, or a robot car. The advent of self-driving autonomous vehicles has made it possible to reduce dependence on labor and reduce accidents. Autonomous vehicles can guide themselves without human command and control.

They can even perceive their surroundings, detect obstacles, track, and commute to a destination using a combination of precise sensors, cameras, and radars. These advanced control systems can interpret the information provided by obstacle-detecting sensors and choose the most appropriate navigation path for the vehicle. Enormous research has been carried out by professionals to bring the idea of autonomous vehicles more conveniently into concrete reality, and the field has recently attracted the attention of various tech giants. Given the different applications of autonomous vehicles, particularly in route planning, designing real-time obstacle-avoidance systems, and path-following, autonomous vehicles have become the foundation of controlling vehicles in unknown conditions. Hence, an effective collision-avoidance and path-following strategy is essential to create an intelligent and robust autonomous vehicle.


Research on Intelligent Transportation Systems raises particular interest, as it tackles challenging issues of autonomy and safety in complex environments. The fundamental steps required for making an autonomous vehicle are Perception, Planning, and Control. Perception consists of environment modeling and localization, which depend on exteroceptive and proprioceptive sensors respectively. Planning aims to produce an ideal trajectory, based on the data delivered by perception, that reaches a given objective. The Control module is committed to following the generated trajectory by commanding the vehicle's actuators. The Perception module provides a representation of the surroundings as an accurate grid. Using Occupancy Grid Maps (OGM) is advantageous for obstacle avoidance, since it allows distinguishing free space and locating static and dynamic objects in the scene. The poses of the objects to be avoided are then used at the path-planning level, which produces a trajectory and a speed profile according to a defined sigmoid function and a rolling horizon. The obtained curvature profile is taken as a reference path for the trajectory-control module. This level gives the steering input to the vehicle according to a lateral trajectory controller using the Centre of Percussion (CoP) rather than the classical centre of gravity. The proposed controller relies on a feed-forward and robust state-feedback action to reduce the effect of disturbances on the lateral error and to ensure lateral stability. Obstacle Avoidance Strategy: 1) Perceiving the environment accurately and efficiently is mandatory for an autonomous vehicle. This investigation predominantly focuses on environmental perception to extract the areas of static and dynamic objects as well as the drivable paths, based on exteroceptive sensors.



Silq is the first high-level programming language for quantum computing that allows a programmer to solve problems without knowing the detailed workings of a quantum computer. It solves complex tasks with less code and offers the facility of erasing values that are no longer needed.

Localization is avoided here, as the position of the vehicle is considered known and static. One of the most widely used methodologies for structuring the information obtained from the street is the Occupancy Grid (OG). It can be utilized for several applications, such as collision avoidance, sensor fusion, object tracking, and Simultaneous Localization And Mapping (SLAM).
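A minimal sketch of the occupancy-grid idea follows, assuming a tiny boolean grid rather than the probabilistic, sensor-updated grids real systems use; the grid size, obstacle, and paths are invented for illustration.

```python
# Minimal illustration of an occupancy grid: 0 = free, 1 = occupied.
# Real systems store occupancy probabilities updated from sensor data.

ROWS, COLS = 10, 10
grid = [[0] * COLS for _ in range(ROWS)]   # 10 x 10 cells of drivable area
for r in range(4, 6):                      # mark a detected obstacle
    for c in range(3, 7):
        grid[r][c] = 1

def is_free(cell):
    r, c = cell
    return grid[r][c] == 0

def path_is_safe(path):
    """A planned path is safe if every cell it crosses is free."""
    return all(is_free(cell) for cell in path)

blocked = [(3, 3), (4, 4), (5, 5)]   # crosses the obstacle
clear = [(0, 0), (1, 1), (2, 2)]
```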

2) The Reference Generation Module is devoted to defining the trajectory and the corresponding speed profile followed by the vehicle. The planner receives the drivable zones and obstacle positions from the perception module, from which a geometric trajectory, as well as the speed profile, can be generated. This part aims to provide a nominal trajectory from the starting point to the final point based on the perceived drivable zones. When an obstacle is detected, a second trajectory (the obstacle-avoidance trajectory) is computed to guarantee the safety and comfort of the autonomous vehicle's passengers, rejoining the nominal trajectory after the avoidance. The avoidance trajectory is obtained through local planning, since it concerns only a small part of the nominal trajectory. The rolling-horizon technique is applied to reduce the computational cost of the trajectory-generation algorithm. These trajectories (nominal and obstacle-avoidance) then serve as references for the control module, mainly the lateral controller.

3) The control module consists of two primary parts: longitudinal and lateral controllers, which together guarantee automated driving along the trajectory. The main focus is on the lateral controller, which manages obstacle avoidance by providing the steering angle needed to follow the ideal path produced by the reference generation module. The ideal path can be achieved by reducing the lateral error and the trajectory error. Among the geometric and dynamic lateral trajectory approaches, the dynamic approach, based on the Centre of Percussion (CoP), is used here; this choice rests on the performance of the control technique. The CoP is a geometric point situated ahead of the vehicle's Centre of Gravity (CoG); regulating the lateral position error at the CoP allows better trajectory tracking. Moreover, since the motion of the CoP is decoupled from the rear-tire lateral forces, the lateral dynamic equations become simpler.


This is a high-level overview of a dynamic obstacle-avoidance scheme based on three levels: perception, route planning, and control guidance. An evidential occupancy grid can be used for dynamic obstacle detection, and a sigmoid function can be used to generate an accurate trajectory to avoid the obstacle. The generated trajectory is then followed by the vehicle through a lateral control strategy at the Centre of Percussion. It is a simple way of avoiding obstacles, but research is still in progress for complex scenarios.
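The sigmoid-based avoidance trajectory mentioned above can be sketched as a lateral offset that ramps smoothly around the obstacle. The gain, shift, and obstacle position below are illustrative values, not those of any published controller.

```python
import math

def lateral_offset(x, shift=3.0, x0=20.0, k=0.4):
    """Lateral offset (meters) at longitudinal position x (meters).

    A sigmoid centered at the obstacle position x0: the offset ramps
    smoothly from 0 before the obstacle to the full lane shift after
    it. The gain k controls how sharp the lane change is.
    """
    return shift / (1.0 + math.exp(-k * (x - x0)))

# Sample the avoidance trajectory every 5 m along the road:
trajectory = [(x, lateral_offset(x)) for x in range(0, 41, 5)]
```

Far before the obstacle the offset is near zero; far after it, the vehicle holds the full shift, giving a smooth, bounded-curvature lane change.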

-Kshitij Kapadni Pune Institute of Computer Technology



DEEP-Dig

philomath

blocking hackers precisely

Cybersecurity issues have become a day-to-day struggle for many companies and are a huge concern for businesses worldwide due to the ever-increasing number of cyberattacks. In 2018, 62% of businesses experienced phishing and social-engineering attacks, while 51% experienced denial-of-service attacks. While overall ransomware infections were down 52%, enterprise infections were up by 12%. Over 500 million consumers, dating back to 2014, had their information compromised in the Marriott-Starwood data breach made public in 2018.

It is estimated that data breaches exposed 4.1 billion records in the first half of 2019, with the average time to identify a breach being 206 days. Each data breach had an average cost of around $3.92 million. Therefore, protecting personal information from such attacks is the need of the hour. Intrusion detection is an important means of mitigating threats, and detecting cyberattacks before they reach vulnerable systems has become a vital necessity for many organizations. It is done today using machine learning-based Intrusion Detection Systems (IDSes). IDSes are based on the observation that most cyberattacks share similar traits, such as the steps intruders take to open back doors, execute files and commands, alter system configurations, and transmit gathered information from compromised machines.

Such malicious activities often leave tell-tale traces that can be identified even when the underlying exploited vulnerabilities are unknown to defenders. Therefore, today's IDSes prevent intrusions by comparing incoming traffic with a pre-existing database of known attack patterns, known as signatures, or by creating a model simulating regular activity and then comparing new behavior with the existing model. When an intrusion is detected, these systems respond by rejecting or blocking it as quickly and decisively as possible, but aborted cyberattacks are missed learning opportunities for intrusion detection. Although the method is promising, its advancement has been hindered by the following limitations: 1. Scarcity of realistic, recent, and publicly available cyberattack data sets. Since non-attack or regular-activity data is far more plentiful than realistic, current attack data, many IDSes must be trained almost entirely from the former, which makes the system less reliable.


2. Feature extraction is another hindrance to further development. The task of appropriately selecting features that generate the most distinguishing intrusion patterns often limits the building of effective models, since it demands manual analysis aided by expert knowledge. 3. Encrypted packets are not processed by most IDSes, so attackers may benefit from encrypting their malicious code, making it harder for IDSes to detect attacks. These limitations call for a newer and more accurate approach to intrusion detection. One such approach has been put forward in the research paper titled "Improving Intrusion Detectors by Crook-sourcing", written by researchers at The University of Texas at Dallas. The following presents the key concepts and points from that paper.



Xenobot is the first living, programmable machine, made exclusively from the stem cells of the African frog Xenopus laevis. Designed using computer models, these living tissues are self-healing and can move independently.

Rather than aborting detected attacks, a system can prolong the attackers' interaction with a decoy version of the vulnerable system to maximize the harvest of useful threat intelligence. It can be done by reconceptualizing software security patches as feature-extraction engines. This deception-based methodology views attackers as free penetration testers and digs for live, up-to-date, labeled data streams to train IDSes and enhance their ability to detect intrusions, hence the name Deception Digging (DEEP-Dig). Traditional deception-based approaches involve the use of honeypots. A honeypot is a computer or computer system intended to mimic likely targets of cyberattacks. Since honeypots use applications and data that simulate a real computer system, attackers are lured into these traps just as a mouse is lured into a cheese-baited trap. Unfortunately, honeypots have limited training value, since they may mistrain the machine to detect attacks only against honeypots instead of the actual system. Moreover, an experienced hacker can quickly tell whether they are attacking a honeypot or the actual system, and whether their hacking attempts have succeeded or failed.

Honey-patches overcome this limitation of honeypots. A honey-patch is a reformulation of a broad class of security patches such that it avoids alerting attackers when their hacking attempts fail. The patches are reformulated by replacing their attack-rejection code with code that forks the attacker's connection to a decoy environment. In other words, this approach retains the security-check code but replaces the remediation code with forking code. When an attack is detected, these honey-patches transparently and effectively redirect the attacker into an unpatched decoy environment, which is aggressively monitored by software to collect information regarding the steps an attacker takes to exploit a vulnerability.

These decoy environments are also called containers. A container holds an attacker session until the session is deliberately closed by the attacker, the connection's keep-alive time expires, the container crashes, or a session timeout is reached. The monitored raw data is labeled and sent for feature extraction, which selects and groups relevant, non-redundant features, i.e. audit streams and attack streams. These grouped features are then queued for updating the model, which is used to augment the IDS's capability to accurately detect malicious activity in the runtime environment. The implementation of the DEEP-Dig methodology results in effortless labeling of the data and supports a new generation of higher-accuracy detection models, reducing intrusions to a greater extent.
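The contrast between a conventional patch and a honey-patch can be sketched as follows. `is_exploit`, `handle`, and `forward_to_decoy` are invented stubs standing in for the real application logic; the point is only that the security check is kept while the rejection path is swapped for a fork to a monitored decoy.

```python
# Conceptual sketch of the honey-patch idea (all names illustrative).

def conventional_patch(request):
    if is_exploit(request):
        return "403 rejected"             # attacker learns the attempt failed
    return handle(request)

def honey_patch(request):
    if is_exploit(request):
        return forward_to_decoy(request)  # attacker unknowingly probes a decoy
    return handle(request)

# Stubs standing in for the real application logic:
def is_exploit(request):
    return "exploit" in request

def handle(request):
    return "200 ok"

def forward_to_decoy(request):
    decoy_log.append(request)             # decoy records every attacker step
    return "200 ok"                       # indistinguishable from success

decoy_log = []
```

Against the honey-patched handler, the attacker sees the same response as a legitimate user, while every exploit attempt lands in `decoy_log` as labeled training data.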


- Neil Deshpande Pune Institute of Computer Technology



Pharmacogenomics

philomath

advancing personalised medicine

"Different drugs benefit different patients, for the sweeter ones do not benefit everyone, nor do the astringent ones, nor are all the patients able to drink the same things." - Disease III, Hippocrates

This reflects the potency, even today, of the divination of Hippocrates from the fifth century BC, in the form of what we now call 'personalized medicine'. The fact that only American soldiers of African origin acquired hemolytic disease after treatment with the antimalarial drug primaquine in the Second World War indicated the role of a person's genetic makeup in their response to a drug. Pharmacogenetics became a recognized science in the late 1950s and got into the big picture as a result of a series of discoveries in and around the field of drug metabolism and toxicity. Cytochrome P450 (CYP) enzymes metabolize endogenous compounds such as steroid hormones and fatty acids, as well as xenobiotics, including drugs and carcinogens. They are highly expressed in the liver and are also found in the lungs, kidneys, and small intestine. The discovery of the CYP2D6 polymorphism in the 1970s revealed pharmacogenetic-based variations in pharmacokinetics, which formed the basis of further research in Pharmacogenomics.

The polymorphisms, or DNA/genetic variations, in individuals result in differences in the efficacy and safety of drugs. These genetic variations mostly belong to SNPs (Single Nucleotide Polymorphisms), gene deletions, VNTRs (Variable Number of Tandem Repeats), and CNVs (Copy Number Variants), which lead to four main types of phenotypes associated with drug metabolism in individuals. Normal metabolizers do not have genetic alterations that impact drug metabolism. Generally, ultrarapid metabolizers (UM) have two or more copies of an allele of a gene that enhances the metabolic extent of an enzyme; intermediate metabolizers (IM), or reduced-function transporters, have one or two copies that reduce the extent to which a drug can be transported or metabolized; and poor metabolizers (PM), or poor-function transporters, generally have two copies of an allele that results in little or no ability to metabolize or transport a drug. For example, individuals with different genotypes taking the opioid analgesic codeine, which is metabolized by CYP2D6, show significantly different effects. If a PM takes codeine (a prodrug), she/he is unable to convert it into its active form to produce analgesia. If the patient is a rapid or ultrarapid metabolizer, they can encounter a higher level of addiction, sedation, and other systemic adverse effects at lower doses than normal metabolizers. Multiple studies have indicated an increasing number of such polymorphic genes (CYP2C19, CYP3A4, CYP1A2, etc.) associated with variable drug response at the receptor, enzyme, or transporter level.
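The mapping from functional allele copies to metabolizer phenotype described above can be sketched as a simple rule table. Real PGx interpretation relies on curated allele-function databases such as CPIC and PharmGKB, so the rules below are only a toy approximation loosely following the CYP2D6 example.

```python
# Toy approximation: classify a metabolizer phenotype from the number
# of functional allele copies (real interpretation is allele-specific).

def metabolizer_phenotype(functional_copies):
    if functional_copies == 0:
        return "poor metabolizer"          # e.g. codeine gives little analgesia
    if functional_copies == 1:
        return "intermediate metabolizer"
    if functional_copies == 2:
        return "normal metabolizer"
    return "ultrarapid metabolizer"        # adverse-effect risk at usual doses

for copies in range(4):
    print(copies, "->", metabolizer_phenotype(copies))
```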

Pharmacogenomics is the science of the genetic basis of inter-individual diversity in the consequences produced by the administration of a drug. It is primarily concerned with DNA variations in human germ cells, except for genomics-guided cancer treatment (somatic tumours).

PGx testing, also known as drug-gene testing, is an enhanced way of testing that accounts for an individual's genetic factors, as opposed to population-based inferences. These tests vary in class and composition, presenting challenges in selecting the relevant test. A PGx test is specific to a particular medication and is not available for widely used drugs like aspirin and many over-the-counter analgesics. Prevention is better than cure; hence, preemptive testing is preferred over reactive testing to provide proper direction to the therapy.



Quantum dot laser diode: a colloidal quantum dot that can potentially confine the emitted light to a region on the order of 50 nanometers across. An integrated optical resonator allows it to function as a low-threshold, optically-pumped laser.

Once the genotype of the enzyme and the metabolizer class of an individual are defined, searching PharmGKB, an open-access online knowledge database on the impact of human genetic variation on drug response, can be a vital first step before planning a personalized drug therapy. The focus is shifting towards using genetic information already known, rather than collecting it only when a concern arises. Today, PGx guidance is available in the areas of psychiatry, pain management, cardiology, gastroenterology, neurology, infectious diseases, rheumatology, oncology, endocrinology, and anesthesiology. PGx has been used to predict a personalized drug dose for a patient, inadequacy of response to a drug, and individuals at serious risk of drug toxicity. PGx testing was first approved by the FDA in 2005 and employed for alleles in CYP2D6 and CYP2C19, both belonging to the Cytochrome P450 superfamily of major drug-metabolizing enzymes in the liver. Since then, the number of clinical pharmacogenetic tests has steadily increased, accompanying the evolving knowledge of genes' functions in drug response. PGx has found applications in bioinformatics, regulatory science (product life cycle), and clinical trials (pharmacokinetics, efficacy, safety, etc.). There were various cases in 2018 where warning letters were issued to companies marketing unreviewed pharmacogenetic tests with false claims of drug-response prediction. Currently, PGx initiatives have been launched in the USA, Asia, and Europe, but the field is still in its initial stage of clinical development and approval.

Pharmacogenomics seems poised to open the doors for collaboration with health economics and with transcriptomics, epigenomics, metabolomics, and proteomics data. With the advent of a data-driven world, health informatics solutions will face fewer challenges in big data storage, mapping, quality checking, and efficient collaboration across systems. As the understanding of Pharmacogenomics in clinical practice grows, Health Economics and Outcomes Research (HEOR) can be foreseen to prosper as well. Currently, the implementation of PGx tests has been moderate because of the scarcity of robust data demonstrating clinical efficacy. A resource-constrained nation like India will initially have to struggle for supplies, infrastructure, skills, and capacity building, so personalized drug therapy will take longer to become publicly accessible. PGx is necessary because of the dynamics of the environment, which drive evolution: each individual of a species that, ages ago, was at most one member of a uniform community now carries unique variations that heighten its survival on this vibrant planet. Today, Pharmacogenomics lets us adapt to this idiosyncrasy of the human population.

The pharmaceutical world is moving towards the peak of customization. Therapy has followed a tapering approach: advancements began at a level where a functional group of drugs, new chemical entities (NCEs), was modified to suit a mass disease. Today, we aim at studying the genomics of a person to devise a subjective, customized therapy that enhances disease management.


-Sakshi Kasat Bombay College of Pharmacy, Mumbai

Pg 40


Langasite

philomath a promising phenomenon

It is well-known that the electrical properties of some crystals can be influenced by magnetic fields, and vice versa. This ‘magnetoelectric’ coupling of magnetic and electrical properties is a more general and widespread phenomenon. It was shown back in 1888 that a dielectric material moving through an electric field becomes magnetized. Although work in this area can be traced back to pioneering research in the 1950s and 1960s, there has been a recent resurgence of interest driven by long-term technological aspirations, since the effect plays an important role in certain types of sensors and data storage.

Polarization occurs when the opposite charges in a crystal are displaced and aligned by an applied electric field. Due to the magnetoelectric effect, it is now also possible to use a magnetic field: the stronger the magnetic field, the stronger the electrical polarization. A team of researchers from Austria, Russia, and the Netherlands reported in a paper in npj Quantum Materials that the relationship between electricity and magnetism is even more complicated. A physicist from TU Wien (the Vienna University of Technology) stated that the coupling of the crystal’s electrical and magnetic properties depends on the crystal’s internal symmetry.
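The field dependence described above can be sketched in standard notation for the magnetoelectric effect (this notation is conventional textbook form, not quoted from the paper itself): the induced polarization is expanded in powers of the applied magnetic field.

```latex
% Induced polarization P_i expanded in powers of the magnetic field H:
% \alpha is the linear magnetoelectric tensor, \beta the second-order term.
P_i = \alpha_{ij} H_j + \tfrac{1}{2}\, \beta_{ijk} H_j H_k + \dots
```

Crystal symmetry dictates which components of these tensors may be non-zero; in a sufficiently symmetric crystal the linear term vanishes entirely, which is what makes the langasite observation so surprising.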

If the direction of the magnetic field was changed a little, the polarization tipped over: a small rotation of the magnetic field could switch the electrical polarization to a completely different state. If a crystal has a high degree of symmetry, for example, if one side of the crystal is exactly the mirror image of the other, then for theoretical reasons there can be no magnetoelectric effect; the symmetry rules out any link between magnetism and electricity. This discovery is therefore striking, because the langasite crystal is so symmetrical that it should not permit any magnetoelectric effect at all. Yet the crystal not only produced a magnetoelectric effect, but one that had never been seen before. Weaker magnetic fields showed no coupling, but as the field strength increased, the holmium atoms changed their quantum state and gained a magnetic moment, breaking the internal symmetry of the crystal. Geometrically, the crystal remained symmetrical; what broke the symmetry was the magnetism of its atoms. This may not sound like a great deal, but there are real-world applications in conserving and saving computer information. The effect can be used in magnetic memories such as computer hard disks and memory chips, which today are written with magnetic coils that require a relatively large amount of energy and time. If there were a direct way to switch the magnetic properties of a solid-state memory with an electric field, it would be a breakthrough. In the coming days, it may well enable new and promising storage schemes in magnetic memories and sensors.

On closely examining the langasite crystal (made of lanthanum, gallium, silicon, and oxygen, and doped with holmium atoms), the researchers observed that the relationship between polarization and the direction of the magnetic field is strongly non-linear.


-Neha Tarte Pune Institute of Computer Technology



Space Travel

philomath an inevitable future

Humans are the creatures responsible for completely revolutionizing the planet Earth. They developed buildings from rocks, went from writing on leaves to storing data in DNA, and created satellites that orbit the planet, starting from absolutely nothing.

Voyager 1 is a tiny spacecraft that has left our Solar System and entered deep space, the first human-made object ever to do so, and one of the greatest achievements of humankind. NASA is also preparing an additional mission, the Interstellar Mapping and Acceleration Probe (IMAP), due to launch in 2024 to capitalize on the Voyagers’ observations. Apart from spacecraft, humans have built numerous observatories that monitor and collect information, from telescopes to LIGO, the gravitational-wave observatory that opened a whole new dimension of space science by detecting gravitational waves.

Space generates zillions of terabytes of data every second, far beyond what any human can ever digest. We have multiple space rovers, spacecraft, and satellites collecting data every second. A space probe is a robotic spacecraft that does not orbit the Earth; it can travel through interplanetary space, land on or orbit other planetary bodies, enter interstellar space, and send back data for scientific study. One of the most groundbreaking probes, Voyager, has travelled farther than anything else we have sent into the universe. The twin Voyager 1 and 2 are robotic space probes that managed to enter interstellar space. Together, the Voyagers provide a detailed report on how the heliosphere interacts with the constant interstellar wind flowing from beyond. Data from NASA’s Interstellar Boundary Explorer (IBEX), a mission that senses that boundary remotely, complements the data sent by the Voyagers. Voyager 2 is the only spacecraft to have closely studied all four of the solar system’s giant planets: Jupiter, Saturn, Uranus, and Neptune. It discovered a fourteenth moon at Jupiter, ten new moons and two rings at Uranus, and five moons at Neptune, and it remains the only human-made object to have flown by Neptune.


NASA scientists say that we are likely to find alien life in the next ten years. In that time, NASA plans to launch a rover to collect rock samples on Mars, two spacecraft to visit distant ocean worlds, and new space telescopes to study planets outside our solar system. Any of those missions could find signs of extraterrestrial life. Due to the insane growth in computing capabilities and algorithmic efficiency, the speed of space exploration will experience a tremendous boost in the next decade. It may not be long before we have to unite as a planet to deal with the life forms we find. The AI that we fear may lead to our destruction could one day help us fight extraterrestrial lifeforms and understand the laws of the universe beyond our present scope.

-Yash Sonar Pune Institute of Computer Technology



Blockchain

philomath the fuel of industry 4.0

With the emergence of new technologies and the upcoming fourth Industrial Revolution, popularly known as Industry 4.0, the factory environment's traditional methods are changing rapidly. Various ways are being implemented to incorporate these disruptive technologies into the factory environment. One such technology is Blockchain, which aims at integrating heterogeneous systems, managing commercial transactions, and nurturing the traceability of assets.

There is a plethora of information and data everywhere, and a large amount of data is exchanged every day. However, because this exchange happens over the internet, maintaining confidentiality, privacy, and integrity has become a significant concern in Industry 4.0. According to surveys conducted by different agencies, nearly 60 million people were affected by identity theft and 12 billion records were misused in 2018, a figure expected to rise to 33 billion by 2023. Security and privacy of information are essential concerns in Industry 4.0, and Blockchain can prove a great asset in mitigating these threats: it has the potential to handle various security attacks because it eliminates the need for a centralized system to perform operations.

Blockchain is a great way to handle and manage transactions, and there are a few more areas where it may soon add value to Industry 4.0. Blockchain can help manage and quantify quality problems with a higher level of specificity. Applications even more specific to Industry 4.0 add the possibility of readily tracking management and product information for markets that demand strong traceability, such as healthcare or military products. Blockchains can be built from new data collected by cameras and sensors, so more information can be managed in a short amount of time. An example of such a revolutionized industrial application is the newly digitized supply chain network, in which blockchain technology acts as the network's core infrastructure. It is empowered by the Internet of Things (IoT), which provides fast connectivity, sophisticated data gathering, and high-performance analytical capabilities. The IoT environment also delivers tracking appliances and smart sensors that connect physical objects with data, enabling more effective manufacturing processes, more intelligent supply chains, and new business ecosystems. The Internet, social networking, and e-commerce led the third industrial revolution; emerging and cutting-edge technologies such as IoT, Cloud, AI, Robotics, and Blockchain lead Industry 4.0 by securing trust, transferring value, and storing and maintaining data integrity. Blockchain will automate processes and soon replace manual activities, becoming an anchor and marking a tremendous change in the realms of Industry 4.0.

In Blockchain technology, several users participate in verifying and validating each transaction. It uses a distributed database that stores data from all nodes in an encrypted format and implements various check systems to validate it.
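The linking-and-validation idea described above can be sketched in a few lines of Python: each block stores the hash of its predecessor, so tampering with any block invalidates every later link. This is a minimal illustration of the principle, not a production design; the function names are ours.

```python
# Minimal sketch of a hash-linked chain with integrity validation.
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically (sorted keys).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # A new block commits to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain):
    # Every block after the first must reference its predecessor's hash.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "genesis")
add_block(chain, "sensor reading #1")
print(is_valid(chain))          # True
chain[0]["data"] = "tampered"   # altering history breaks the chain
print(is_valid(chain))          # False
```

A real blockchain adds consensus among many nodes, digital signatures, and proof-of-work or similar mechanisms on top of this basic structure, but the tamper-evidence shown here is what makes the distributed validation possible.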


-Tanmay Kulkarni Kolhapur Institute of Technology



PISB Office Bearers 2020-2021 Chairperson:

Hritik Zutshi

Vice Chairperson:

Onkar Litke

Treasurer:

Hemang Pandit

Vice Treasurer:

Rohan Pawar

Secretary:

Bhushan Chougule Shreya Lanjewar

Joint Secretary:

Aniket Kulkarni Durvesh Malpure Harmandeep Singh

Secretary of Finance:

Saket Gupta

VNL Head:

Saurabh Shastri

VNL Team:

Saket Gupta

PRO Head:

Sreya Patranabish

PRO Team:

Garvita Jain

Design Head:

Bhushan Chougule Nishita Pali Rashmi Venkateshwaran Saurabh Shastri

P.I.N.G. Team:

Anushka Mali Kiransingh Pal Paresh Shrikhande Rutuja Kawade

Webmaster:

Ajay Kadam

Web Team:

Fatema Katawala Omkar Dabir Sakshee Phade Shreya Deshpande

App Head:

Ritesh Badaan Siddharth Patil

App Team:

Atharva Saraf Durvesh Malpure Tanuj Agrawal Vaibhav Pallod

Programming Head:

Ajay Kadam Kapil Mirchandani Kunal Chaddha Kushal Chordiya Saumitra Kulkarni Tanmay Nale

Programming Team:

Aboli Marathe Gaurav Ghati Kaustubh Odak Pranjal Newalkar Tanmay Pardeshi Yash Sonar

WIE Chair:

Pallavi Dadape Rashmi Venkateshwaran Shreya Lanjewar

WIE Secretary:

Garvita Jain

Digvijay Chaudhari Kiransingh Pal Maahi Singh Shreya Deshpande

Marketing Head:

Neelanjney Pillarisetty

Marketing Team:

Aniruddha Garje Eesha Kulkarni Durvesh Malpure Rohan Pawar

Joint Secretary of Finance:

Saurabh Shastri

Design Team:

Rashmi Venkateshwaran Sachin Johnson Shruti Phadke Sidhee Hande

P.I.N.G. Head:



PISB Office Bearers 2020-2021 Senior Council

Aaryan Kaul Amol Gandhi Devashish Dewalkar Krushna Nayse Isha Pardikar Mihir Bhansali Muskan Jain Omkar Deshpande Onkar Bendre Piyusha Gumte Prathamesh Musale Purvesh Jain Rajavi Kakade

Rohit Nagotkar Rucha Shinde Sanya Gulati Siddhi Honrao Shivang Raina Shraddha Laghate Shreepad Dode Shubham Kirve Sudhanshu Bhoi Vansh Kaul Varun Gattani Yash Biyani

Junior Council

Aditi Shriwastava Ajay Kompalwad Akshay Satpute Aparna Ranade Asawari Walkade Atharva Sadre Ayush Das Ellika Mishra Jait Mahavarkar Janhavi Bagul Janhavi Raut Krishiv Mewani Maithili Sabane Manasi Thonte Mufaddal Deewan Muskan Kumari Prajwal Patankar Rohit Kulkarni Sampreeti Saha Sanket Landge Vipul Shinde




