P.I.N.G. Issue 17.1

Page 1

The Editor’s Desk

PICT IEEE Newsletter Group (P.I.N.G.) is the annual technical magazine of the PICT IEEE Student Branch (PISB) published alongside Credenz. It is a magazine of international repute that has gained wide recognition since its inception in 2004.

With its engaging articles on current technology, P.I.N.G. aims to instil a sense of technical awareness in its readers. It takes pleasure in having a readership from numerous colleges and professional institutes globally, thereby educating fellow students, professors, and professionals on the most recent innovations. We had the opportunity to interact with Mr. Arpit Agrawal, Co-Founder and CTO of Cion Digital and Cakesoft, and Mr. Vikrant Agarwal, Lead Product Manager, Zynga, for Issue 17.1.

We would like to express our gratitude to all authors for their contributions to P.I.N.G. 17.1.

We would also like to express our gratitude to our seniors for their continuous support and guidance in making this Issue a grand success. A special mention to our junior team as this Issue is a testament to their persistent and diligent efforts.

Saima Ansari, Designer
Shreyas Chandolkar, Designer

Dr. Amar Buchade

Branch Counsellor

Dear All,

It gives me immense pleasure to write this message for the new edition of PICT IEEE Student Branch's (PISB) P.I.N.G. The Credenz edition of P.I.N.G. is always special for all of us. I am thankful to the P.I.N.G. team for their efforts in releasing this issue. It is a great contribution by the PICT IEEE Student Branch, which provides an opportunity for all, including student members, to showcase their talent and views and further strengthen IEEE activities. It is a great pleasure to serve PISB as a Counsellor.

It is an interesting, valuable, and great learning experience to work at the branch and IEEE Pune Section level. I am thankful to all the members of the PICT IEEE Student Branch for their active support. Due to the active involvement of students, the PICT IEEE Student Branch received the Outstanding Student Branch 2021 award from the Section, and two students received Outstanding Volunteer awards. Student volunteers also received appreciation for their involvement in IEEE Pune Section activities. I also received the IEEE Outstanding Branch Counselor and Branch Chapter Advisor Award 2021 from IEEE for demonstrating the Institute's commitment to the educational, personal, professional, and technical development of students in IEEE-related fields of interest.

I would also like to mention the strong support from Mr. R.S. Kothavale, Managing Trustee, SCTR; Mr. Swastik Sirsikar, Secretary, SCTR; Dr. P.T. Kulkarni, Director, PICT; Dr. R. Sreemathy, Principal, PICT; and Dr. P.S. Game, the committee in-charge, PISB.

PISB provides a platform for students to gain more insightful knowledge of the technical world through the many activities conducted by its members. At PISB, events are conducted throughout the year, mainly Credenz, Credenz Tech Dayz (CTD), the national-level coding contest NTH, Algostrike, and Ideathon, which are widely appreciated by students, acclaimed academicians, and industry professionals. We, at PISB, will continue to involve students in their technical interests and further strengthen IEEE activities.

Prof. Dr. Amar Buchade, Branch Counselor, PICT IEEE Student Branch

Recent developments in the Industrial Automation Sector Maven

With the industrial automation market currently worth more than $50 billion, this technology has been growing to fulfill a wide array of functions and be used in various business models. Quite simply put, industrial automation is the use of autonomous systems to operate machines and processes in many sectors, using technology such as robotics, industrial controllers like PLCs, DCS, and motion controllers, along with computer software. Industries use automation to boost production and cut costs connected with labor, benefits, and other related expenses, while enhancing precision and flexibility. As every company attempts to expand its capacity and reach, operational issues present a significant frictional cost to exponential expansion. By gathering data from intelligent sensing devices, industrial automation removes the unpredictability and provides complete transparency into production count, total leakage, and turn-around time. It is no surprise that industrial automation has been on the rise for quite some time, as corporations seek to create products more efficiently while also lowering operational expenses. With this in mind, the industry trend has been moving towards the following practices, which more corporations seem to be adopting by the day.

Predictive Maintenance

With the availability of real-time condition-monitoring data for wider performance analysis, such as machine usage, cycles, and running hours, such data may be factored in as an important element of predictive maintenance, because it allows maintenance programmes to be scheduled in advance, irregularities to be identified, and preventative actions to be implemented. Since so many sensors are integrated into the automated production system, systems can assess their own status. This ensures that quick feedback will automatically result in a notification for machine maintenance before the problem reaches catastrophic proportions. As a result, machine downtime is decreased and costs are saved. In some cases, the machines may even be able to perform their own maintenance.
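The feedback loop described above can be sketched in a few lines. The following is a minimal, illustrative anomaly check that compares each new sensor reading against a rolling baseline and flags it for maintenance before drift becomes catastrophic; the window size, threshold, and sample data are arbitrary assumptions for illustration, not settings from any particular controller:

```python
from statistics import mean, stdev

def maintenance_alerts(readings, window=20, threshold=3.0):
    """Flag sample indices whose reading deviates sharply from the
    rolling baseline of the previous `window` samples."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # A reading more than `threshold` standard deviations away
        # from recent behaviour triggers a maintenance notification.
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Stable vibration levels followed by a sudden spike at the end
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95] * 5 + [4.0]
print(maintenance_alerts(data))  # only the final spike is flagged
```

A real deployment would stream readings from PLC or sensor gateways and feed richer models, but the scheduling advantage is the same: the alert arrives before the failure.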

Virtual Commissioning

The design and deployment of a new manufacturing system is frequently a time-consuming and expensive process. After the design has been finalised and the equipment has been installed, there is one more phase before production handover: commissioning. Controls are incorporated here, defects are discovered and corrected, procedures are established, and operators are trained on new equipment, new processes, or altered procedures. This phase is difficult to plan for and routinely overruns, which delays production and might result in late delivery, and even lost business. In a nutshell, virtual commissioning allows engineers and operators to test new installations, as well as any modifications, throughout both the commencement and maintenance phases prior to applying them in the physical world. As a result, installation and integration proceed more smoothly, with fewer cost overruns and fewer chances of downtime affecting production.

Digital Twin

The digital twin analogues of products have recently become quite popular in the technical world. Digital twin technology, which is part of Industry 4.0, involves producing virtual replicas of a product or a process. Sensors are strategically placed to capture data from the physical process in order to enable digital twinning. These data are then transmitted into the digital twin in real time, where they are processed by AI. Following this, one may run simulations on product actions and properly analyse the functioning mechanism of the object or process. Digital twins may also help to accelerate and simplify application development to address day-to-day industrial difficulties.

1 APRIL 2022 ISSUE 17.1 CREDENZ.IN 4
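As a rough illustration of the pipeline described above, the sketch below models a digital twin as an object that ingests streamed sensor readings, mirrors the latest state of its physical counterpart, and runs what-if scenarios without touching the real asset. The class, field names, and readings are hypothetical, not drawn from any real digital-twin platform:

```python
class DigitalTwin:
    """A minimal digital twin: mirrors a physical asset's state from
    streamed sensor readings and supports simple what-if scenarios."""

    def __init__(self, asset_name):
        self.asset_name = asset_name
        self.state = {}      # latest mirrored sensor values
        self.history = []    # full stream, kept for later analysis

    def ingest(self, reading):
        """Apply one real-time sensor reading to the twin."""
        self.state.update(reading)
        self.history.append(dict(reading))

    def what_if(self, overrides):
        """Return a scenario state without modifying the twin itself."""
        return {**self.state, **overrides}

twin = DigitalTwin("press-01")
twin.ingest({"temperature_c": 71.5, "rpm": 1200})
twin.ingest({"temperature_c": 73.0})    # only one value changed
print(twin.state)                       # latest mirrored state
print(twin.what_if({"rpm": 1500}))      # simulate a faster run
```

In practice the `ingest` step would be fed by message queues from IoT gateways, and `what_if` would drive a physics or AI model rather than a dictionary merge, but the separation of mirrored state from simulation is the core idea.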

Use of IIoT for Industry 4.0

The Internet of Things (IoT) and Industry 4.0 are bringing new possibilities to the manufacturing industry. Early adopters will benefit from digital factories, which are the future of manufacturing. Predictive maintenance is one of the most widely touted advantages of IIoT devices used in the manufacturing industry. Organizations can estimate when a machine will need to be serviced using real-time data generated by IIoT systems. This allows the essential maintenance to be carried out prior to the occurrence of a failure. This is especially useful on a production line, where a machine failure could result in a work stoppage and significant expenditures. A company can improve operational efficiency by proactively resolving maintenance issues. However, one must keep in mind that many of these technologies are constantly evolving. To remain competitive, businesses must plan ahead of time and be deliberate in their technology adoption.

Increasing Use Cases of Additive Manufacturing

In recent years, 3D printing has gained a lot of attention. The technology has advanced significantly owing to the developments in tools and filaments. Today, 3D printing is being employed as additive manufacturing, allowing for the faster and more accurate creation of industrial prototypes. As the list of printed materials expands across all mediums, businesses must consider what amalgamation of traditional manufacturing with additive manufacturing fits, and how quickly they should embrace the new technology.

Use of AR/VR Technologies

Augmented reality technologies are poised to be one of the most significant inventions, opening up new opportunities for a multitude of businesses globally. The same seems to apply to the automation sector as well. Manufacturers will increasingly use augmented and virtual reality (AR/VR) applications in the future. Integrating these technologies with IIoT and AI shows enormous potential in a variety of sectors. Product design and development, production, field service and machine maintenance, and customer support and logistics are examples of these. Using augmented reality in the early stages of product development, for example, enables the virtual iteration of product concepts without incurring recurrent prototype costs. As products become more intricate, the number of components increases while their size decreases. This makes the manufacturing process more difficult and time-consuming for personnel who need to complete error-free activities quickly. Some complex manufacturing processes are now aided by AR-guided interactive work instructions, which help speed up work and eliminate errors while putting less strain on workers. Field service professionals may also use AR and/or VR to perform guided repairs remotely, thus keeping employees safe and saving the organization time and money. AR can also be used for real-time daily condition monitoring, thus replacing periodic machine maintenance and lowering breakdown losses.

The horizon of industrial automation seems to be teeming with numerous potential opportunities to develop superior processes and provide customers with better goods. With a lot of hard work and a little luck, these trends might make future visions a reality, thus transforming the way we work and live for generations to follow.

Statista predicts there will be 30.9 billion IoT devices by 2025.

Unraveling the techie

With Mr. Vikrant Agarwal

Mr. Vikrant Agarwal is an expert in product management, game monetization, and data analysis. After graduating from the Pune Institute of Computer Technology, he attended Carnegie Mellon University, and he is currently a lead product manager at Zynga. He has played a key role in the production, marketing, and live operations of numerous games, including Dawngate, Game of War, Mobile Strike, and FarmVille 3. He is a frequent speaker on product management and analytics at tech and gaming conferences such as Pocket Gamer, Product School, IGDC, and India F2P.

The video-game industry has become the premier provider of immersive entertainment and has seen an exponential boom during the past few years. A plethora of unique prospects are on the horizon in this field. This interview features his answers to questions regarding his journey in game development, product management, and the prospects of the game development industry.

Q: How did you discover your love for video games?

A: I've always enjoyed playing games. It started at a very young age with board games like Monopoly and The Game of Life and then later on, I discovered games like Contra and Super Mario Bros on the console. By the time I joined PICT, I had developed an interest in competitive gaming and used to compete in tournaments for Counter-Strike 1.6 as well as Age of Empires 2. Some of my fondest memories were playing in collegiate tournaments like 'Nipun' during INC at PICT and at the World Cyber Games, which were held at the city and state level across India.

Q: Product management is a term that many engineering students are unfamiliar with. Could you please give us more insights about the role and what it takes to become a great product manager?

A: At a high level, a product manager helps lead the vision and the execution strategy for the products a company is making. For example, when a video game is being developed, a product manager helps decide things like which platform it should be launched on, when the game should be launched, what content the game should have, and which features should be made first.

To achieve these goals, a good product manager leads by influence, working with each of their stakeholders, such as engineering managers, fellow project managers, game designers, quality assurance, marketing, legal, finance, public relations, and many more. In addition to this, strong analytical skills and an understanding of metrics are key to being a successful product manager.

Colloquium
Colloquium
Mr. Vikrant Agarwal, Lead Product Manager, Zynga Game Studios:
“Don’t focus on trying to make a bestselling game. Focus on making the fun. Make that, and everyone will come.”


Q: You have worked on numerous games. Could you please take us behind the scenes and elaborate on what actually goes into making a successful game?

A: In today's world, there are multiple types of games being made. For instance, there are AAA games, which let us play in large fantasy worlds that are wholly immersive but take many years to develop. Horizon Zero Dawn, Assassin's Creed, and Red Dead Redemption come to mind when I think of such games. Mobile games are another kind, with thousands of such games released every day. There is a game for everyone on mobile, whether you want to play a fun word game like Words With Friends, a puzzle game like Monument Valley, or an engaging strategy game like Clash Royale. The goal is to provide an experience for everyone on the go. To make a successful game, my goal is to always focus on the customer first, learning from them and changing directions based on what the data shows us.

Q: What hurdles do you face every day trying to manage projects on a global level?

A: The biggest challenge when communicating globally is making sure everyone is on the same page and driving towards the same goals. Being aware of cultural differences and different workstyles, and really, really listening to what your partners are saying, goes a long way in establishing camaraderie, even if you're thousands of miles away and have never met in person.

Because of the pandemic, a lot of us haven't worked together in person for almost two years. Since we joined new companies during the pandemic, many of us are in the same city but have never even met! Initially, things were a lot harder. But over time, communication styles evolved based on this new normal. Post-It notes became Slack messages and in-person happy hours became online Zoom Jackbox Party hours.



Q: What changes have occurred in the game development industry since you first began?

A: The mobile games industry changes very rapidly. When I joined, the race was on, and studios were trying to make as many games as possible. Later, since acquiring users had become expensive, developers started making more engaging, long-form experiences. Instead of six months, development started taking two years, but retention went from a few months to many months, even years in some cases!

After a few years of enjoying these engaging experiences, there was a dearth of simple games without all the complex mechanics that players could just pick up and play on the go. Thus, hypercasual gaming was born. These are very simple, fun games that you can pick up and play really easily. No tutorials, no complex leveling systems, just shoot that ball into the hoop. Or merge these houses. Or topple those tiles.

During the pandemic, all this changed when everyone stayed at home and was able to devote more time to long-term gaming experiences. Now that we’re coming out of the pandemic, people don’t want to lose those vivid and engaging experiences. So we are now seeing a rise in multiplatform gaming experiences like Genshin Impact, where you play that AAA experience at home on console and can carry that AAA experience with you on mobile wherever you go.

Q: What are the various methods for revenue generation from a game, with regards to its development lifecycle? Could you please elaborate?

A: From a development-lifecycle perspective, depending on the type of game, studios can start testing with real players as early as the Alpha or Beta stage. Most monetization testing starts late in the Beta period, during the soft-launch phase. If the game is a premium retail game, then it generally retails for a fixed price such as $40 to $60, with some more premium collector box options. Eventually, downloadable content (DLC) helps keep the development team engaged while the design team brainstorms and starts development on the next title. Let's use an action game series as a hypothetical example. The game launches at a $60 retail price but has a season pass which contains all the DLCs, or you can choose to buy each episode individually. This additional content launch spans a year or sometimes even longer. Based on my experience at other studios, a small subset of the team would generally start working on the next game soon after launch, and as the newer title matures, more of the team moves over to help develop and launch the next game in the series.

If the game is free to play, then there's no cost to play, but it may have cosmetic items that a player can buy to support the developer. For example, a game like Fortnite is free to play on PC. Everyone's game experience is equal. But if you enjoy the various vanity content like skins and accessories, you can earn those by putting in time playing the game or buy them outright.

Q: There is this general notion that there isn't much money in the game development industry, especially in India. Could you enlighten our readers about the scope in this industry and clear up some of the misconceptions?

A: The gaming industry is one of the fastest growing industries in the world. To recognize this, we need to expand the definition of who a gamer is, beyond those who just play console or PC games for long hours. If you play Candy Crush on your phone, you're a gamer. If you play Ludo, you're a gamer. The Indian gaming industry has especially been growing at a rapid pace, and there are a number of large gaming studios that have expanded their operations in India to help develop talent in this fast-growing market.

Gaming used to be this activity that was considered frivolous or “timepass”. Now, it’s much closer to



something like movie making. There’s an art and a science to it. Some of us as gamemakers are drawn in by our passion for playing games, but stay because it’s just so much fun to make things that surprise and delight people!

Q: What are the various challenges faced in the post-launch phase of a new game, with regards to player-base retention and expansion?

A: Games generally go through an extensive soft-launch period to help find issues before global launch. But no matter how much you test, there's nothing like the real thing. Servers go down, some feature won't work, or an analytics call won't fire. I find the post-launch period the most exhilarating. You're getting real customer feedback at scale, and you finally see which ideas worked and which ones can be improved upon. You juggle hotfixes to the game, adding in new features, reading game analysis, and responding to customers, all at the same time. In terms of retention, the initial goal is to measure D7 retention, followed by D30 and D60. Along with these, there's a host of other AERM (Acquisition, Engagement, Retention, Monetization) metrics that are looked at. Once you truly understand your economy, which can take many weeks, that's when you can pivot and adjust the longer-term roadmap for game expansion.

To learn more about game metrics and breakdowns, I recommend reading the Deconstructor of Fun blog at https://www.deconstructoroffun.com/blog. They do a great job of analyzing new gaming trends and doing game teardowns.
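The D7, D30, and D60 figures mentioned above are simple cohort ratios. As a rough sketch, Day-N retention might be computed from install and login records like this; the data layout is invented for illustration, since real analytics pipelines work over event logs at far larger scale:

```python
from datetime import date, timedelta

def day_n_retention(installs, logins, n):
    """Share of a cohort who returned exactly N days after installing.

    installs: {player_id: install date}
    logins:   {player_id: set of dates the player logged in}
    """
    if not installs:
        return 0.0
    returned = sum(
        1 for player, installed in installs.items()
        if installed + timedelta(days=n) in logins.get(player, set())
    )
    return returned / len(installs)

# A four-player cohort that all installed on the same day
installs = {p: date(2022, 4, 1) for p in ("a", "b", "c", "d")}
logins = {
    "a": {date(2022, 4, 8)},   # back on day 7
    "b": {date(2022, 4, 8)},   # back on day 7
    "c": {date(2022, 4, 5)},   # back on day 4 only
}
print(day_n_retention(installs, logins, 7))  # 2 of 4 players -> 0.5
```

The other AERM metrics are computed over the same event streams; retention is simply the one watched first because it arrives within days of launch.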

Q: What are your views on the concept of cloud gaming, with services like Google Stadia and Nvidia GeForce Now? Would they ever be a viable alternative to purchasing hardware consoles?

A: The cloud gaming industry is definitely in its infancy right now. There are a lot of barriers to entry, since it requires significant expertise to have these games streamed without any noticeable lag.

What I like about this concept is that it democratizes games. Anyone can play; having a fancy PC rig or a console would not be a requirement anymore. I still remember that during my PICT days, the most popular games were Warcraft 3, CS 1.6, and AOE2. The reason for this was the low minimum requirements. Anyone could play these games, whether they had a five-year-old desktop or a laptop they were using for their classes. That is why multiplayer games like League of Legends are popular today, and if we could bring the vast, beautiful worlds of single-player experiences to the masses, that could engage and delight so many more people.

Q: A lot of game studios tend to subcontract work to other studios while working on large-scale projects. What factors contribute to this ever-expanding practice, and what challenges may occur as a consequence?

A: Subcontracting is a key part of development for any industry, not only games or tech. It is quite important in games, since you don't want to hire a hundred people, launch a product or game, and then have to adjust headcount while you take two to three years to ramp up another game. Having subcontractors helps the studio scale headcount as needed during the game development process.

One of the key challenges is knowledge retention. Yes, subcontracting gives the studio flexibility, but when a contractor leaves, all the knowledge they gained goes with them. So you're constantly retraining folks every few months, which is bandwidth that could have been spent on development instead. It's definitely a fine balance.

Q: With the advent of NFTs, how do you anticipate NFTs integrating into games in the near future?

A: Personally, I see certain use cases where NFTs could do very well. Currently, if I get a gold trophy for winning a tournament in my favorite game, all I have is that small picture on my phone. I can



share it on social media, but that's about it. If it were available as an NFT, not only is it something that I can share easily with everyone, but it now has inherent value. Expanding on this concept, imagine a CCG (collectible card game), where players collect cards of many different types. Today, in a physical CCG, you are limited to buying and trading cards in your city or country. In a digital CCG, you can trade with anyone, but if the game ceases to exist, everything is gone. With an NFT-based CCG, your cards always retain value, and you can trade with anyone, across the planet, easily and seamlessly. I'm really excited to see what the industry develops and can't wait to play this next generation of games!

Q: Where do you see the future of AR/VR in games headed, especially with the concept of the Metaverse gaining new ground? What would be the challenges in adapting these technologies for mainstream consumption?

A: We are definitely in the early adoption phase for these technologies. While the technology is maturing, we are still trying to find the best use cases for them.

In the case of AR, it had been around for a while, but there was no mainstream application for it. Then Pokemon Go came along and introduced us to location-based gaming. Now, AR has been integrated into Google Maps so that you no longer have to guess whether to turn left or right; you can simply hold up your screen and it tells you where to go.

Our job as developers is to provide players with experiences they truly enjoy. As someone once told me, "Don't focus on trying to make a bestselling game. Focus on making the fun. Make that, and everyone will come."

We would like to thank Mr. Vikrant Agarwal for taking time out of his busy schedule to provide such intriguing and insightful responses. We hope that our readers found this conversation interesting and that it opened their minds towards pursuing a profession in the game development industry.

-The Editorial Board

Metaverse

Experience the world like never before

Editorial

In October of 2021, Mark Zuckerberg announced the renaming of Facebook to Meta. Using the word “meta” denotes that an entity is referring to itself. However, it is most likely an abbreviation for the notion of a “Metaverse.”

The term “Metaverse” is a wide one. It mainly refers to a shared virtual world setting that anyone may access via VR or AR through the internet. It may be thought of as an artificial world built on top of our physical reality to build a more immersive internet, where individuals may spend their time. Most platforms now feature virtual identities, avatars, and inventories that are bound to only one platform. A metaverse may allow you to construct a persona that you can take everywhere, just as easily as copying your profile image from one social network to another.

Due to the pandemic, the digital industry has received a massive push. This has led to the increased importance of one's digital presence and identity. The Metaverse can play an important role in standardizing this digital presence across the internet. A metaverse would also make it easier for people to connect on a personal level due to the immersive experience it promises, thus making interactions more natural. The Metaverse can have applications in multiple domains. With its advent, businesses can move away from the two-dimensional surface of e-commerce toward life-like virtualized settings for a more immersive experience. This will inevitably lead to a digital economy. Along with new business concepts, the Metaverse also supports the production, ownership, trading, and tokenizing of copies of real-world assets to strengthen cryptocurrencies and NFTs. In a more utopian view of a Metaverse, these transactions are interoperable, enabling one to transfer virtual objects such as clothing or vehicles from one platform to another. Companies may utilize the Metaverse to create more engaging and realistic NFT markets where customers can engage with other users, look at desired NFTs, and therefore make better purchasing decisions.

Virtual reality tourism is another developing metaverse use case which has the potential for widespread adoption and recognition. The first-person point of view is the most significant distinction between visiting a site in person and watching it on video. The metaverse, virtual reality (VR), and augmented reality (AR) may all be combined to create such an immersive digital world.

The Metaverse offers several opportunities for creating a virtual workplace or learning environment. The Metaverse can assist in providing experiences that make you feel like you're all working or studying in the same room. The potential for overcoming communication barriers with the Metaverse is also being recognized by educational institutions. Furthermore, VR simulations in the Metaverse can be used to help architecture and medicine students practice their skills.

The leading technologies which enable this concept are blockchain, VR & AR, and IoT. Blockchain technology is critical to the Metaverse's development and progress. Businesses may use blockchain to create decentralized and transparent systems that provide digital proof of ownership, digital collectibility, value transfer, and interoperability. For example, players of popular games such as Decentraland use the local digital currency MANA to purchase virtual land and other in-game accessories.

AR and VR are critical to the Metaverse because they provide consumers with an exciting and immersive 3D experience. These two technologies serve as gateways to the virtual world. VR is very different from AR, yet it is closest to the Metaverse concept: it generates an entire computer-generated digital environment that users may explore using VR headsets, gloves, and digital sensors.

IoT, as a system, connects our physical environment to the internet by allowing data to be sent and received via sensors. The Internet of Things receives data from the physical world and translates it into virtual space, improving the accuracy of digital representations. IoT data streams can represent how things in the Metaverse will behave depending on the changing surroundings and other factors in the real world. The diverse applications of metaverse technology demonstrate its multiple benefits, such as accessibility and communication.

The Metaverse has its origins in Neal Stephenson's 1992 sci-fi novel Snow Crash. It also appears in Ernest Cline's Ready Player One and William Gibson's Neuromancer.

The Metaverse is predicted to be worth around $800 billion in 2024, and the Metaverse coin is predicted to be the next big thing in cryptocurrency.

Mark Zuckerberg, Facebook's CEO, has become the standard-bearer of this virtual world; he is already working on his own metaverse and has renamed his social network, but he is not alone. Many firms have already stated their interest in this technology, including Microsoft with HoloLens, Sony with its PlayStation VR headgear, Facebook with its own Oculus, Epic Games with its video games, Alibaba with Ali Metaverse, Tencent with a new video game company built on its metaverse, and even the owner of TikTok.

There are concerns about what the metaverse will imply for privacy, whether it would be inclusive, and how to minimize potentially dangerous content and settings. Since the metaverse is still in its early stages of development, there is a potential to incorporate these features by design.

The concept of metaverse seems to be appealing, which is why many of the world’s biggest technology firms are investing in its creation. If it is successful, it has the potential to change consumer and business behavior. A metaverse would fundamentally alter how we see everything digital around us by creating a virtual universe for us to interact with people.


-The Editorial Board

1 APRIL 2022 ISSUE 17.1 CREDENZ.IN 13

Bridging the Rift

With Mr. Arpit Agrawal

Mr. Arpit Agrawal is an alumnus of PICT and a serial entrepreneur with a slew of renowned startups under his belt, such as Cakesoft Technologies, Blockchain Simplified and, more recently, Cion Digital. He has accomplished a lot as an entrepreneur as well as in terms of technical innovation. Blockchain is a foundational technology, with the ability to provide a new foundation for our economic and social systems, and a multitude of unique opportunities is on the horizon in this sector. This interview covers his responses to questions about his entrepreneurial journey and the prospects of the blockchain sector.

Q. What aspects of your college days at PICT do you reminisce over the most?

A. There are a lot of aspects I miss about college. The friends, teachers, and various other activities are what made my college life special. I used to participate in numerous competitions with my friends, such as coding contests, paper presentation competitions, and many others held in Pune and Mumbai. Submissions and college work were also an integral part of our college life. Overall, it was a fun experience. Now that we have the added responsibilities of family and career, we realize that our college days were some of the best years of our lives.

Q. We noticed that in your tenure at PICT, you created a full-fledged operating system from scratch. How has this experience helped you later down the line?

A. Making an OS from scratch was a great learning experience. I was interested in core systems and firmware programming, rather than developing software involving high-level implementation, and OS development was my way to gain practical knowledge in this field. Everyone should try to gain substantial practical knowledge in their field of choice by working on projects and taking on internships. The OS I developed was similar to DOS, in that it did not provide a graphical user interface. I developed a bootloader, file system drivers, numerous other low-level operations, and a few other components. I later joined Marvell Semiconductor and worked in firmware development on real-time operating systems (RTOS), and my prior experience helped me quite a lot during my time there.

Q. Considering that you were pursuing a PhD earlier, what are the research opportunities that an undergraduate may consider exploring towards higher education, and what avenues may they hold?


A. I worked on numerous research fields during my time at IIT Bombay. I was mostly into the combined applications of computer science and physics, working on quantum computing and some other theoretical physics concepts. There are a lot of research opportunities in both hardware and software. I was more inclined towards the hardware part of it, but my roommates were working on robotics-related software that runs on a small chip with very little battery power available, and they had to code complex AI algorithms onto that small chip so it could take decisions rapidly in real time. There are research opportunities in every field: in finance there are decentralised finance and blockchain, in robotics there are mechanical and AI-related fields, and even in advanced computing people are doing research in quantum computing. During B.Tech., students should focus on making their foundations stronger, and during their masters they can focus on the research part of it.

Q. Since your career spans multiple organisations like Cakesoft Technologies, Blockchain Simplified and, recently, Cion Digital, what motivated you to transition from an employee to an entrepreneur, and what are the key pointers to keep in mind when founding a start-up?

A. After graduation, I tried my own start-up for a couple of years, but that did not work out well. I was weak in the management part and took some incorrect decisions, so to take a break from the entrepreneurial journey, I worked at Marvell Semiconductor for three to four years. From the very beginning, though, I was interested in entrepreneurship and self-employment. I formed Cakesoft Technologies and worked with around 50 start-ups all around the world. During this past seven-year journey, I learnt the dos and don’ts, the fundamental and advanced things to keep in mind, and now I have ventured into riskier ventures like Cion Digital. When you are doing start-ups, you not only need knowledge about the technology and product management but also sales and marketing, managing people, bringing in the right set of people, and raising investments. So, if someone is really interested in doing start-ups, he/she should target the best MBA colleges so that they can learn all the basics and build a solid network, because the people there will go on to be the successful future entrepreneurs. Your connections also come into play in building a brand name and raising funds more easily. It is better to work in small, funded start-ups than in a developed one, because you get to learn a lot, and diverse roles, such as semi-technical and semi-managerial ones, give you wonderful experience towards making your own start-up successful.

Q. Looking at your extensive experience in the blockchain industry, what are the crucial considerations that one must bear in mind to facilitate a seamless development process?

A. There are several limitations in blockchain development. For example, if you must execute code, you need to pay money, which is different from the traditional world. You have to pay for every transaction on the blockchain, so each algorithm must be designed efficiently. You must produce a mathematical model and then implement it on the blockchain. You must be innovative and produce solutions to several types of problems. You also have to think about the user interface and about security, to avoid phishing. Very few people understand blockchain, so to target a larger audience you have to make your product extremely easy to use.

Q. With regards to providing common blockchain platforms as a service, how does one ensure to cater to the unique needs or business models of organisations with a common platform?

A. The Ethereum blockchain is like the Java or .NET framework: it is generic enough, and it has its own language, called Solidity, which is similar to Java. If you want to use the Ethereum blockchain to conduct voting for the next election, you can write a voting smart contract, which is similar to a Java class. If you want to issue your final-year engineering marksheet on the blockchain, you will write smart contracts for that.
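The voting example can be made concrete. Solidity itself is beyond the scope of this interview, so what follows is a plain-Python sketch of the contract's logic only; the class name, fields, and addresses are illustrative, and a real contract would persist this state on-chain and charge gas per call.

```python
class VotingContract:
    """Toy model of the voting smart contract described above.

    A real Solidity contract would store this state on the blockchain;
    here we only model the logic and its require()-style guards.
    """

    def __init__(self, candidates):
        self.tally = {c: 0 for c in candidates}
        self.has_voted = set()  # one vote per address, like mapping(address => bool)

    def vote(self, voter_address, candidate):
        # Guards a Solidity contract would enforce with require()
        if voter_address in self.has_voted:
            raise ValueError("already voted")
        if candidate not in self.tally:
            raise ValueError("unknown candidate")
        self.has_voted.add(voter_address)
        self.tally[candidate] += 1

    def winner(self):
        return max(self.tally, key=self.tally.get)


election = VotingContract(["alice", "bob"])
election.vote("0xA1", "alice")
election.vote("0xB2", "alice")
election.vote("0xC3", "bob")
print(election.winner())  # alice
```

The double-voting guard is the part that matters: on a public chain, anyone can call `vote`, so the contract itself must reject repeat voters rather than trusting the caller.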


So, all these blockchains are very flexible, with a very extensive set of features and options. Previously, making smart contracts interact with the external world was impossible, but now, with oracle protocols such as Chainlink, people have produced various oracles to make them interact with the external world, for example to fetch weather information or stock market information, and your smart contracts can react based on that information. Most blockchains are flexible enough to develop a solution for any industry.

Q. Considering your venture in the RPA industry, and given that RPA is all about streamlining and automating business processes while businesses in today’s data-centric world have to handle many different types of data, how does one handle unexpected data inputs or unstructured data formats in RPA?

A. Ten or twenty years back, there were mostly SQL-based databases, like Microsoft or Oracle databases, which can store only structured data. You have to define a schema and tables and store data in them. But as the software industry grew, people faced the issue of multiple systems interacting with each other and extracting data from websites or other software. To cater to this problem, people came up with NoSQL-based databases like MongoDB. In MongoDB you do not need to define a schema; you can just create a collection and store data in it in the form of organised JSON-based documents. If you get data from multiple sources like RPA or AI, you can just dump that data as JSON documents into your database. When you write code, you can choose which part of the data you have to use, and as your software matures, you can access other fields of this data. The unused fields remain in the database for future use.

Q. Do you see any future possibilities of amalgamating RPA and blockchain for any ground-breaking applications?

A. Blockchain is used for implementing fintech-based software, while RPA is used in every function across various industries. Right now, it is hard to imagine where a combination of RPA and blockchain will go in the future.

Q. Coming back to blockchain and cryptocurrency, how do you see the future of digital currency playing a role on the global economic stage, and does cryptocurrency as a payment service provide any comparative advantage over UPI, the current standard payment service in India?

A. There are two main things regarding crypto-based payments. One is that they are very seamless for cross-border transactions. UPI works only within India, but lots of businesses are global these days. Even SWIFT transfers are time-consuming, not real-time, and cost a lot, and services like TransferWise and Xoom do not provide that good an exchange rate. Comparatively, crypto-based payments are instant and do not depend on the location of the sender and the receiver. There is no middleman, so you save on costs as well: with TransferWise there are usually three parties involved, the sender’s bank, TransferWise, and the receiver’s bank, whereas crypto payments happen peer-to-peer. When you swipe your Visa or MasterCard, they take a substantial chunk, around 2 to 3% of the transaction amount. UPI is user-friendly for INR transactions, but now lots of people are keeping their savings in cryptocurrency, whether stablecoins or volatile coins like Bitcoin or Ethereum, so they can directly make payments through crypto-based methods instead of UPI. So, both methods can coexist efficiently.

Q. From a consumer’s viewpoint, how would cryptocurrency lending and credit systems stack up against their traditional banking counterparts?

A. Lots of people are investing in cryptocurrencies these days. If they sell their cryptocurrencies, there are multiple problems. Firstly, they have to pay the 30% capital gains tax imposed by the Indian


government, and secondly, they would miss out on future gains by breaking up their investments. To solve this problem, we have introduced a crypto-collateralised lending product. We are tying up with various lenders all over the world. These lenders can take crypto as collateral and lend money at an extremely low interest rate, usually less than 5 to 6%, and you can repay at your own convenience. If the borrower defaults on the loan, it can be settled by selling the bitcoin: around 5% of the principal amount is deducted, and the remaining amount is returned to the borrower. A crypto collateral loan works similarly to a gold loan.
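The figures in the answer can be turned into a small worked example. The 5% interest rate and 5% default penalty below are illustrative stand-ins taken from the interview's rough numbers, not Cion Digital's actual terms.

```python
def crypto_loan_outcome(collateral_value, loan_amount, annual_rate, years, defaulted):
    """Sketch of a crypto-collateralised loan as described in the interview.

    On repayment the borrower pays principal plus simple interest and keeps
    the collateral. On default the collateral is sold, a penalty (assumed
    5% of principal) is deducted, and the remainder returns to the borrower.
    """
    if not defaulted:
        repayment = loan_amount * (1 + annual_rate * years)
        return {"borrower_pays": repayment, "collateral_returned": collateral_value}
    penalty = 0.05 * loan_amount
    returned = collateral_value - loan_amount - penalty
    return {"borrower_pays": 0.0, "collateral_returned": returned}

# Borrow $10,000 against $20,000 of bitcoin at 5% for one year.
ok = crypto_loan_outcome(20_000, 10_000, 0.05, 1, defaulted=False)
bad = crypto_loan_outcome(20_000, 10_000, 0.05, 1, defaulted=True)
print(round(ok["borrower_pays"], 2))        # 10500.0
print(round(bad["collateral_returned"], 2)) # 9500.0
```

Either way the borrower keeps most of the collateral's value, which is what makes this resemble a gold loan rather than a sale that would trigger capital gains tax.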

Q. Since you have worked as a manager of multiple organisations at the same time, and as you get more exposure to the management aspects of an organisation, how do you manage to stay connected with the core technical concepts?

A. I work very closely with the engineers, and I help them when they face problems on the actual coding part as well, for instance, if they are not able to implement some library, or, in blockchain development, if a transaction is not getting validated or is always failing. So I always look at the code. I spend one or two hours a day following some YouTubers who cover the latest technology concepts. Last year the concept of zero-knowledge proofs came up, and then there were optimistic rollups. These are complex concepts where we can use advanced cryptography mechanisms to solve some of the problems, and they are interesting as well. So, I spend some time learning on YouTube and then implementing it with the engineers, and the rest of the time I work on sales, marketing, and the management part. I do what I love. Staying close to engineering and technical aspects is what I like, so I make sure that I stay connected with it.

Q. What message would you like to give to our readers at P.I.N.G.?

A. Make your career in something that you really love to do, because you have to spend eight hours of your day for the rest of your life doing that thing. If you do something that you enjoy, you naturally excel at it, and you will love your life. Many people are confused about choosing between an MS or an MBA, or between software development, testing, applications, or systems. Go for the thing that excites you the most. When you are doing engineering, there are lots of jobs outside coding as well: business analyst, project delivery manager, implementation manager, marketing of technical products, or technical journalism. Explore enough and you will surely find something you love to do, make your life better, and earn a good salary as well.

We would like to thank Mr. Arpit Agrawal for taking out time from his busy schedule to provide such intriguing and insightful responses. We hope that our readers found this conversation interesting and that it opened their minds towards pursuing entrepreneurial ventures.

-The Editorial Board

AI4DB

AI meets Database

Over the last five decades, artificial intelligence (AI) and databases (DB) have been actively researched. On one hand, database systems have been widely employed across applications because they are simple to use, with user-friendly declarative query paradigms and the encapsulation of complex query optimization routines. On the other, advancements in AI have lately occurred as a result of three driving forces: large-scale data, novel methods, and high processing power.

Furthermore, AI and databases can benefit from one another. AI has the potential to make databases more intelligent (AI4DB). Traditional database optimization approaches (e.g., cost estimation, join order selection, knob tuning, index and view advisors) are empirical in nature and require human intervention (e.g., DBAs) to tune and manage the databases. As a result, they are unable to meet the high-performance requirements of large-scale database instances, diverse applications, and diverse users, particularly in the cloud. Fortunately, learning-based strategies can help to solve this issue. Deep learning, for example, can increase the accuracy of cost prediction, reinforcement learning can be used to optimize join order selection, and deep reinforcement learning can be used to adjust database knobs.

This article highlights the major AI4DB strategies put forth in the paper “AI Meets Database: AI4DB and DB4AI” by Guoliang Li, Xuanhe Zhou, and Lei Cao, as well as research challenges and unresolved problems.

Learning-based database configuration attempts to automate database setup tasks such as knob tuning, SQL rewriting, and database partitioning by utilizing AI approaches.

Databases feature hundreds of configurable system knobs that govern several critical elements of database performance, such as memory allocation. Traditional manual approaches rely on DBAs to adjust these knobs based on their experience, but they always take too long and are incapable of handling the millions of database instances on cloud databases. To overcome this issue, CDBTune, for example, treats database tuning as a sequential decision problem and uses reinforcement learning to improve tuning performance.
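CDBTune's actual deep-RL agent is far richer than this, but the sequential trial-and-feedback loop it relies on can be sketched with a simple epsilon-greedy tuner over a single knob; the reward function here is a made-up stand-in for measured throughput.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Candidate settings for a single knob (an illustrative buffer size in GB).
knob_values = [1, 2, 4, 8, 16]

def measure_throughput(knob):
    # Stand-in for benchmarking the database at this setting: peak at 8 GB.
    return 100 - (knob - 8) ** 2

# Epsilon-greedy loop: mostly exploit the best-known setting, sometimes explore.
estimates = {k: 0.0 for k in knob_values}
counts = {k: 0 for k in knob_values}
for step in range(300):
    if random.random() < 0.3:
        knob = random.choice(knob_values)         # explore a random setting
    else:
        knob = max(estimates, key=estimates.get)  # exploit the best so far
    reward = measure_throughput(knob)
    counts[knob] += 1
    estimates[knob] += (reward - estimates[knob]) / counts[knob]  # running mean

best = max(estimates, key=estimates.get)
print(best)  # 8
```

The point of the loop is that no human ever inspects the knob directly: the tuner converges on the best setting purely from observed feedback, which is what lets such systems scale to millions of cloud instances.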

An SQL rewriter may eliminate unnecessary or wasteful operators from logical queries, considerably improving query speed. However, for a sluggish query there are various rewrite orders, and classic empirical query rewriting algorithms only rewrite in a set order, which may result in inefficient queries. Instead, deep reinforcement learning may be utilized to choose applicable rules and apply them in the correct order.

WayRay, a startup backed by Alibaba, has taken the route of projecting AR data directly onto a car’s windshield, enabling hazard and object detection.

Traditional approaches heuristically choose columns as partition keys and are incapable of balancing load and access efficiency. Some studies have therefore used a reinforcement learning model to investigate various partition keys, and a fully-connected neural network to predict partition benefits.

Learning-based database optimization tries to use machine learning approaches to solve difficult optimization problems such as cost estimation and join order selection.

Database optimizers rely on cardinality and cost estimation to pick an optimal plan, but traditional approaches cannot adequately capture the correlations between distinct columns/tables and hence cannot give high-quality estimates. A recent LSTM-based approach develops a representation for each sub-plan with physical operators and predicates and, using an estimation layer, outputs the estimated cardinality and cost concurrently.

A SQL query may have millions, if not billions, of alternative plans, so it is critical to select a good plan quickly. Traditional heuristic approaches are incapable of finding optimal plans across hundreds of tables, and dynamic programming is too expensive when exploring the vast plan space. Thus, various deep reinforcement learning-based approaches exist, such as SkinnerDB, which employs Monte-Carlo tree search-based methods to experiment with multiple join orders in each time slice and can optimize the join order on the fly.
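SkinnerDB's Monte-Carlo search is out of scope here, but the underlying problem, that join order changes cost dramatically, can be illustrated by exhaustively costing every order of a tiny three-table join. The table cardinalities, selectivities, and cost model below are invented for illustration.

```python
from itertools import permutations

# Invented table cardinalities and pairwise join selectivities.
card = {"users": 1_000, "orders": 10_000, "items": 100}
selectivity = {
    frozenset(["users", "orders"]): 0.001,
    frozenset(["orders", "items"]): 0.01,
    frozenset(["users", "items"]): 1.0,  # no join predicate: cross product
}

def plan_cost(order):
    """Sum of intermediate-result sizes for a left-deep join in this order."""
    joined = {order[0]}
    rows = card[order[0]]
    cost = 0
    for t in order[1:]:
        # best selectivity between the new table and anything already joined
        s = min(selectivity[frozenset([t, j])] for j in joined)
        rows = rows * card[t] * s
        joined.add(t)
        cost += rows
    return cost

best = min(permutations(card), key=plan_cost)
worst = max(permutations(card), key=plan_cost)
print(best, round(plan_cost(best)))    # cheapest order, cost ~20000
print(worst, round(plan_cost(worst)))  # dearest order, cost ~1100000
```

Even on three tables, the worst ordering here is 55 times more expensive than the best; with hundreds of tables the gap explodes, which is why learned search over join orders is attractive.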

Traditional databases are designed by database architects based on their previous expertise, but human architects can only explore a limited number of design options. Self-design strategies based on learning have therefore recently been developed, giving rise to learning-based database design.

Learned indexes are presented as a way to not only reduce index size but also improve query speed when utilizing indexes. One study argues that indexes are models: the B+ tree index, for instance, is a model that maps each query key to its page. Data updates and high-dimensional data are also being investigated with learned indexes.
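The "index as model" idea can be sketched with the simplest possible learned index: a linear model fit to a sorted key array, with a bounded local search correcting the model's error. This is an assumption-level sketch, not the paper's recursive model index.

```python
# Minimal learned index: approximate the position of a key in a sorted array
# with a linear model, then correct the guess with a bounded local search.
keys = sorted(x * x for x in range(1, 1001))  # a skewed key distribution

n = len(keys)
# Fit position ~ a * key + b from the endpoints (crude least-effort "training").
a = (n - 1) / (keys[-1] - keys[0])
b = -a * keys[0]

# Worst-case model error over all keys gives a guaranteed search window.
max_err = max(abs(int(a * k + b) - i) for i, k in enumerate(keys))

def learned_lookup(key):
    guess = int(a * key + b)
    lo = max(0, guess - max_err)
    hi = min(n - 1, guess + max_err)
    # Bounded scan inside the error window instead of a full binary search.
    for i in range(lo, hi + 1):
        if keys[i] == key:
            return i
    return -1

print(learned_lookup(250 * 250))  # 249
print(learned_lookup(7))          # key absent -> -1
```

A real learned index replaces the single linear model with a hierarchy of models so the error window (and hence the scan) stays tiny; here the deliberately skewed keys make the window large, showing why one linear model is not enough.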

By minimizing data conflicts, effective task scheduling may considerably enhance performance. First, traditional workload prediction approaches are rule-based: one such method uses database engine domain knowledge to identify signals relevant to workload characteristics, but it takes a long time to rebuild its statistics model when the workload changes, so a study proposes an ML-based system that forecasts the future trend of different workloads. Second, typical database systems either schedule tasks sequentially, which does not account for conflicts, or schedule workloads based on the database optimizer’s estimated execution costs. Another study suggests a learning-based transaction scheduling system that uses supervised algorithms to balance concurrency and conflict rates.

Coming to learning-based database monitoring: traditional approaches rely on database administrators to keep track of most database activity and report any irregularities, but these methods are ineffective at scale. As a result, machine-learning-based methodologies are offered for tasks such as performance prediction.

Predicting query speed is critical for meeting service level agreements (SLAs), especially for concurrent queries. Deep learning is used in a study to estimate query latency in concurrency contexts, such as interactions between child and parallel plans. It does, however, use a pipeline structure (which results in information loss) and fails to capture operator-to-operator connections such as data sharing/conflict characteristics. As a result, a more thorough study presents a performance prediction approach based on graph embedding: concurrent queries are characterized with a graph model, and the workload graph is incorporated into performance measures with a graph convolution network.

mPATH, or Mobile Patient Technology for Health, allows clinical providers an option for capturing information digitally, engaging patients in their own health care and reconnecting with patients.

Some learning-based database systems are being researched by both academia and industry. SageDB, for example, gave a vision to specialize database implementation by learning data distributions (CDF models) and developing database components based on that information, such as learned indexes and learned query scheduling.

Using AI approaches to improve databases presents a number of issues.

First, there are several types of ML models (e.g., feed-forward, sequential, graph embedding), and manually selecting appropriate models and adjusting their parameters is wasteful. Second, it is difficult to determine whether a learnt model is effective in most cases, necessitating a validation model. Third, distinct database components may employ different ML models, hence a single ML platform is required to accomplish unified resource scheduling and uniform model management. Finally, to attain excellent performance, most AI models require enormous amounts of high-quality, diverse training data; however, getting training data in AI4DB is difficult, since the data is either security-critical or reliant on DBAs.

Traditional OLAP is concerned with relational data analytics. However, numerous new data kinds have emerged in the big data age, such as graph data, time-series data, and geographical data, necessitating new data analytics approaches to evaluate these multi-model data. As a result, integrating AI and DB approaches to create new data analytics capabilities is difficult.

Despite substantial research, there are still numerous possibilities and problems in putting AI4DB approaches into practice which, when addressed, may significantly increase database performance.

-Rajendra Deshpande, Founder, Ardi-Bi Systems


Transformer Models

Attention is all you need

Technology of the year

While the world is still recuperating, research is continuing at a breakneck rate, particularly in the field of artificial intelligence. One aspect, particularly pertaining to Transformers, does seem to stand out: 2021 saw the transformer architecture extend its applications to a myriad of use cases. Although Transformer structures have been around since 2017, architectures such as OpenAI’s GPT-3 and DeepMind’s AlphaFold revealed the Transformer’s extraordinary capacity to learn more deeply and quickly than prior generations of sequence models, as well as to perform well on problems beyond natural language processing; since then the architecture has been further expanded and built upon by numerous teams globally, providing groundbreaking results.

Transformers at a Glance

Transformers are a type of neural network architecture that is becoming increasingly popular. In 2017, scientists at Google Brain published a research paper, “Attention Is All You Need”, rather aptly named, the gist of which proposed an architecture that allowed the model to focus on certain parts of the input sequence while producing an output, i.e. the Transformer architecture; the NLP community has never looked back since. Transformers were created to address the issue of sequence transduction, also known as neural machine translation. This includes any task that converts an input sequence to an output sequence, covering speech recognition, text-to-speech conversion, translation, and so on.

Transformers, unlike previous sequence modelling structures such as recurrent neural networks (RNNs) and LSTMs, deviate from the paradigm of sequential data processing. They process the entire input sequence at once, employing an attention mechanism to learn which parts of the input are relevant in relation to others. This enables Transformers to readily link distant sections of the input sequence, something recurrent models have historically struggled with. It also enables major portions of the training to be completed in parallel, making better use of the massively parallel hardware that has become accessible in recent years and significantly lowering training time.


Hugging Face is a popular open-source library for NLP, with over 7,000 pretrained models in more than 164 languages with support for different frameworks.

The Attention Mechanism

Transformers’ key invention is the multi-head attention block. The attention block seeks to answer the question of which sections of the text the model should focus on, which is why it is referred to as an attention block. Each attention block takes three inputs: the query, key, and value matrices.

Each word is represented in the key matrix, and the dot product of the query matrix with the key matrix is effectively a matrix of similarity scores. This dot-product matrix is then divided by the square root of the number of dimensions in the key and query matrices to scale the scores. To transform the scaled scores into probabilities, a softmax activation function is applied. These probabilities are known as the attention weights, and they are multiplied by the value matrix to form the attention block’s final output. A single attention block might instruct a model to focus on a specific aspect of a sentence, such as the tense; adding several attention blocks enables the model to focus on various language aspects such as part of speech, tense, nouns, verbs, and so on.
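The computation just described can be written out directly. Below is a bare, pure-Python sketch of a single attention head for toy 2-dimensional token vectors, without the learned projection matrices a real Transformer layer would apply to produce Q, K, and V.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, one head, no projections."""
    d_k = len(K[0])
    out, all_weights = [], []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        w = softmax(scores)  # the attention weights for this query
        all_weights.append(w)
        # The output is a weighted mixture of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out, all_weights

# Toy sequence: 3 tokens with 2-dimensional representations.
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

output, weights = attention(Q, K, V)
print([round(sum(w), 6) for w in weights])  # [1.0, 1.0, 1.0]
```

Because the softmax makes each row of weights sum to one, every output vector is a convex mixture of the value vectors, with more weight on the values whose keys resemble the query.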

Recent Developments

The year 2021 was pivotal for large language models, with all of the main players in technology introducing game-changing models. DeepMind unveiled Gopher, a 280-billion-parameter transformer language model. According to DeepMind’s study, Gopher nearly halves the accuracy gap between OpenAI’s GPT-3, the largest Transformer model of 2020, and human expert performance, while outperforming forecaster expectations. Following that, Google released the Generalist Language Model (GLaM), a trillion-weight model that employs sparsity. LG AI Research has unveiled Exaone, a new artificial intelligence language model capable of tweaking


BLOOM is the first open-source language model with over 100B parameters ever created for languages such as Spanish, French, and Arabic.


300 billion parameters or variables. Microsoft and NVIDIA took it a step further, introducing the Megatron-Turing Natural Language Generation (MT-NLG) model, which has 530 billion parameters, making it the largest transformer architecture at the time of writing. Google has already presented Switch Transformers, a technique for training language models with over a trillion parameters. With such humongous strides in one space, readers are bound to wonder where these models will find their use and which model architecture suits a given application. The answer depends largely on the use case, since certain architectures are bound to outperform others at certain tasks, and, to a degree, on the data used for fine-tuning the model.

Potential Uses

While Transformer architectures have found multiple applications in both computer vision and Natural Language Processing (NLP), the major breakthroughs have been drastic improvements in quality and accuracy on NLP tasks, mainly text classification, text completion, sequence generation, question answering, and keyword identification. A majority of model architectures use these tasks as rubrics for measuring performance against pre-existing architectures.

A unique application that seems to have taken the NLP community by storm is the generation of human-like sequences and sentences, which is ever approaching the zenith of human mimicry. Such applications seem to be the next big thing that may essentially bridge the gap between humans and machines. Heartwarming stories, such as that of Joshua Barbeau, who imbued the personality of his late fiancée into a chatbot using GPT-3, enforce a sense of good faith in the direction in which applications of such models are headed.

The Future

While companies may spend millions on the research and development of these models, making them proprietary and inaccessible, many of their creators believe in providing the architectures for open use, unlocking doors to exploring and experimenting with such state-of-the-art systems. The most notable example is OpenAI’s GPT-3, which debuted in 2020 but was made available for public use in late 2021 via an API. Hugging Face, another company, has been constantly providing cutting-edge pre-trained models for open use, further championing the cause that AI should be open for all. From a financial standpoint, the NLP market is projected to grow at a CAGR of 20.3 percent (from USD 11.6 billion in 2020 to USD 35.1 billion by 2026). According to an article published in October 2021 by Statista, a research group, NLP would grow 14-fold between 2017 and 2025. This is undoubtedly extraordinary progress for a technology that was largely limited to labs even a decade ago. Hence, one may safely assume that Transformers are going to be around for a long time, that is, unless a better architecture debuts. The NLP space will be an interesting domain to observe in the coming years, but attention is all you need!

-The Editorial Board

Brain Computer Interfaces

Mapping thoughts, Merging realities

Featured

Brain-Computer Interface (BCI) technologies have become one of the most intriguing research topics. It is a multidisciplinary problem statement that requires the joint application of varied fields such as medical science, computer science, electronics, and many more. BCIs have numerous medical and non-medical applications, including education and gaming. One application of the brain-computer interface is to allow amputees to control prosthetic limbs just by thinking about the motion. Another is helping people suffering from locked-in syndrome to communicate with others via a special interface.

Before learning about the latest developments in this field, it is pivotal to take a brief look into the history of BCI technology.

• 1780s: Luigi Galvani shows that muscle and nerve cells have electrical forces responsible for the movement of muscles.

• 1924: German physiologist and psychiatrist Hans Berger records the first human EEG.

• 1963: Natalia Petrovna Bekhtereva publishes a paper on the use of multiple electrodes implanted in subcortical structures for treating hyperkinetic disorders.

• 1987: Phillip Kennedy builds the first intracortical brain-computer interface by implanting neurotrophic-cone electrodes into monkeys.

• 1998: Researchers at Emory University in Atlanta report the installation of a brain implant that stimulates movement in a person with locked-in syndrome.

• 2005: Tetraplegic Matt Nagle becomes the first person to control an artificial hand using a brain-computer interface (BCI) as part of Cyberkinetics’ BrainGate project.

• 2013: A BrainGate patient demonstrates control of a robotic prosthetic limb. The FDA approves the Argus II retinal implant system developed by Second Sight.

• 2017: The US Defence Advanced Research Projects Agency (DARPA) launches a program to make neural implants that record high-fidelity signals from one million neurons.

The BCI system is a classic example of a classification problem in machine learning. A BCI system comprises four sequential components:


The Encephalophone is a device capable of playing music by recording our brain waves and analysing them. The brain waves are fed to a synthesiser, which interprets them and produces music.

1. Signal acquisition

This process measures brain signals using a particular type of sensor. The signals are amplified for electronic processing to remove any undesirable signal characteristics, then digitized and transmitted to a computer.

2. Feature extraction

This step includes examining the digital signals to distinguish the pertinent signal characteristics and represent them in a compact form. One of the extracted features involves the firing of individual cortical neurons responsible for the movement of hands and arms.

3. Feature translation

The derived signal features are fed to the feature translation algorithm, where they are converted into appropriate commands. For example, a power decrease in a given frequency band may translate to an upward displacement of a computer cursor.

4. Device output

Commands processed by the feature translation algorithm drive various functions of the output device, such as letter selection and cursor control. This is also the step where feedback is given to the user. These four components are supervised by a protocol that defines the inception and timing of the operation. Such a protocol enables the system to be efficient and flexible at the same time.
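The four stages above can be sketched end-to-end in a few lines. The sketch below is purely illustrative: the "acquired" signal is synthetic, the feature is band power in an 8-12 Hz band, and the threshold is invented for this toy data; a real BCI would use calibrated decoders on genuine recordings.

```python
import numpy as np

FS = 250  # assumed sampling rate, Hz

def acquire(n_seconds=2, amp=1.0, rng=None):
    # 1. Signal acquisition: a synthetic 'EEG' trace -- a 10 Hz
    # oscillation buried in noise, standing in for amplifier + ADC.
    rng = rng or np.random.default_rng(0)
    t = np.arange(n_seconds * FS) / FS
    return amp * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

def band_power(signal, lo=8, hi=12):
    # 2. Feature extraction: mean power in the 8-12 Hz band via the FFT.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    return spectrum[(freqs >= lo) & (freqs <= hi)].mean()

def translate(power, threshold=1000.0):  # cutoff invented for the toy signal
    # 3. Feature translation: a power *decrease* in the band maps to an
    # upward cursor displacement, as described in the text.
    return "cursor up" if power < threshold else "cursor down"

# 4. Device output: feed the command to the application.
strong = acquire(amp=1.0)  # strong rhythm  -> high band power
weak = acquire(amp=0.1)    # suppressed rhythm -> low band power
print(translate(band_power(strong)), "/", translate(band_power(weak)))
# cursor down / cursor up
```

The supervising protocol mentioned in the text would sit around this loop, deciding when each stage runs and at what rate.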

The various approaches to designing a BCI are described below.

1. BCI using scalp-recorded EEG – One of the most researched BCI paradigms is the visual P300 speller, used for typing. It uses the scalp-recorded EEG (electroencephalogram) to record brain signals and is capable of providing faster response times than other BCI paradigms.

2. BCI using ECoG activity – ECoG (electrocorticography) activity is recorded from the cortical surface, which requires the implantation of an electrode array. ECoG records signals of higher amplitude than EEG and hence offers superior resolution. This paradigm enables the user to select characters using motor imagery and is thus practical for long-term BCI use.

3. BCI using activity recorded inside the brain – This paradigm is still under study and is applied most notably in the BrainGate2 trial. BrainGate is a brain implant system that has pioneered the field of the brain-computer interface. It is in clinical trials designed to help those with severe spinal cord injuries that leave them impaired in bodily functions. A sensor implanted in the patient monitors brain activity in order to translate user intentions into computer commands.

In previous incarnations, participants were asked to think about motions pointing to a particular character, which enabled them to type 40 characters per minute. The BrainGate collaborators wanted to find a faster approach, and machine learning came to the rescue. The team conducted a clinical trial called BrainGate2, which is testing the safety of a system that relays information from the brain to the computer. In this trial, two minuscule sensors were implanted in the motor cortex, which is responsible for controlling hand and arm movements.

The trial participant was a 65-year-old man, paralyzed from the neck down by a spinal cord injury. The sensors picked up responses from individual neurons when the man envisioned writing, and the electric signals generated in the brain were detected by the electrodes on the cortical surface. A machine learning algorithm was deployed to interpret the patterns involved in the selection of each letter. The process is so rapid that the man could type out sentences at a rate similar to someone typing manually on a keyboard. The system is this fast because each letter evokes a highly distinctive pattern, making it easy for the algorithm to differentiate between them.
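The core idea, that each imagined letter evokes a distinctive neural pattern a classifier can separate, can be illustrated with a toy sketch. Everything here is synthetic and hypothetical (random "firing patterns", a nearest-template classifier); the actual BrainGate decoder is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)
letters = list("abc")
n_units = 50  # hypothetical number of recorded neurons

# Each imagined letter evokes its own characteristic firing pattern
# (a synthetic stand-in for real cortical recordings).
prototypes = {c: rng.normal(size=n_units) for c in letters}

def record_trial(letter, noise=0.5):
    # One noisy observation of the pattern evoked by imagining `letter`.
    return prototypes[letter] + noise * rng.normal(size=n_units)

# "Train": average a few trials per letter into a template (centroid).
templates = {c: np.mean([record_trial(c) for _ in range(20)], axis=0)
             for c in letters}

def decode(trial):
    # Classify a new trial as the letter whose template is nearest --
    # distinct per-letter patterns make this separation easy.
    return min(templates, key=lambda c: np.linalg.norm(trial - templates[c]))

decoded = [decode(record_trial(c)) for c in letters for _ in range(10)]
truth = [c for c in letters for _ in range(10)]
accuracy = np.mean([d == t for d, t in zip(decoded, truth)])
print(f"decoding accuracy: {accuracy:.2f}")
```

Because the per-letter patterns are well separated relative to the noise, even this trivial nearest-centroid decoder classifies essentially every trial correctly, which is the property the article attributes to the real system.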


Will the brain-computer interface eventually become advanced enough to replace lost functions for people with neurological disorders? While some might speculate that BCI will be the face of the 21st century, that will not be the case unless BCI researchers solve problems in critical areas like signal-acquisition hardware, BCI validation, and reliability.

A promising step towards improving the hardware of BCIs was taken by a group of experts headed by Arto Nurmikko, a professor at Brown's School of Engineering. The research project is called Neurograins.

Performing complex tasks such as lifting something or speaking requires input from more neurons, and taking input from these neurons is a crucial challenge for BCIs. Traditionally, an implant is inserted through the scalp into the brain cortex, where it records brain signals. However, if we want input from hundreds or thousands of neurons, we cannot place such implants on the scalp. The sensors must be shrunk so that they can be implanted directly into the brain. Such devices are called Neurograins.

A research team including experts in electronics, neurology, and computer science has made considerable progress in developing these sensors. The sensors, dubbed Neurograins, independently record the electrical pulses made by firing neurons and send the signals wirelessly to a central hub, which coordinates and processes them.

In a study published on August 12, 2021, in Nature Electronics, the research team demonstrated the use of nearly 50 such autonomous Neurograins to record neural activity in a rodent.

This complete arrangement is analogous to a mobile network. The external patch that reads the signals resembles a cellular phone tower, employing a network protocol to coordinate the signals from the Neurograins, each of which has its own network address. The patch also supplies power wirelessly to the Neurograins, which are designed to run on a minimal amount of electricity.

Scaling this to humans will take more work to ensure safety and minimize complications. The researchers hope to eventually build a system that yields novel scientific insights into the brain and new therapies that could help people affected by devastating injuries.

Brain-computer interfaces remind us that the futures we dreamt of as children are taking shape right now. Further developments will benefit all of humanity: they will give the disabled a chance at some level of autonomy in their lives, provide insights into the functioning of the human brain, and may even lead to research into loading human consciousness into a computer. A few years ago the BCI system was in the realm of science fiction, but the idea of controlling machines by mere thought has fascinated us since the beginning and will continue to do so.

MIT’s Brain on A Chip represented human handwriting with 95 percent accuracy in a simulation.

No-code AI platforms

Bringing AI to everyone

Natural intelligence refers to the intellect demonstrated by animals, including humans, while Artificial intelligence refers to the intelligence displayed by machines.

In 1950, Alan Turing published an article called "Computing Machinery and Intelligence," in which he predicted that machines would eventually compete with human beings in every field. Today, we can observe that artificial intelligence, commonly known as AI, has taken over many activities, allowing machines to learn from experience and perform cognitive tasks.

No-code development platforms (NCDPs):

In a traditional computer programming language, one has to write code to develop an application and ensure correct output as per the requirement, so knowledge of code is a prerequisite. No-code development allows anyone, irrespective of their programming knowledge, to create software applications through a simple graphical user interface that they can configure. Low-code and no-code development platforms are designed to quicken the overall application development process. No-code development platforms don't require code writing, since prebuilt templates make application development faster while ensuring that business goals are satisfied. Because companies must deal with simultaneous trends across a heavily mobile workforce while qualified software engineers are scarce, no-code platforms have gained popularity.

No-code AI is a category within the AI landscape. Its main purpose is to make AI accessible to everyone. No-code AI is analogous to no-code development: AI becomes visual, code-free, and often offers a drag-and-drop facility to deploy AI machine learning models. Anyone with a non-technical background may employ no-code AI to analyze data, classify it, and build more accurate prediction models.

No-code development works on model-driven logic. To some extent, it reduces the barriers to entry for products and services by extracting value from software-enabled technological platforms. No-code platforms are also referred to as visual integrated development environments (IDEs).

Philomath

The operating temperature of the D-Wave 2000Q quantum computer is 0.015 Kelvin.

Steps involved in No-code development:

• Requirement gathering

• APIs selection

• Creation of App workflow

• Front-end and SQL queries to customize the code

• User acceptance testing

• Application deployment

• Application updating on demand

No-code development is especially helpful for people with no technical background, such as business analysts, office administrators, and small- to medium-scale business owners. These individuals can create software applications or software components without the assistance of a professional developer.

Businesses use no-code development to test, learn, and extract value from different web projects, the Internet of Things, artificial intelligence, machine learning, and blockchain applications.

Some popular no-code development platforms are listed below.


Torrefaction technology developed at Stanford University can convert biomass into a clean-burning fuel. It consists of a reactor that uses the torrefaction process to increase the density of biomass, thereby making it portable.

Significance of No-code AI in Business:

According to Forbes, 83% of firms may need AI for strategic operations, yet there is a talent shortage of data scientists in the industry. With innovations in all sectors, AI talent has been in high demand over the last two years. Since around 60% of AI talent is captured by technology and financial-service organizations, small businesses must search for freelance data scientists for their AI needs. Businesses should therefore take an active role in the development of AI models, but doing so demands a great deal of time, cost, and experience.

No-code AI solutions:

There are several providers for No-code AI technologies such as NLP, voice recognition, and analytics. Every vendor has a unique design technique to build AI solutions. Some of them are listed below:

• DataRobot

• Clarifai AI Platform

• MonkeyLearn

• Intersect Labs

• Teachable Machine

• Akkio

No-code AI models play a significant role in this respect, as they reduce the time needed to build an AI model, letting companies adopt machine learning models for their process operations.

Most data scientists have less business experience than domain experts. No-code development therefore plays a significant role by allowing business users to apply their domain expertise, enabling them to design an AI solution as rapidly as feasible.

Some of the advantages of no-code development:

• Fast – Building an AI solution requires a lot of code writing, structuring, training, and debugging of models, which takes considerable time, and individuals unfamiliar with data science would have to spend much more time on it. No-code development can reduce development time by up to 90%.

• Low cost – Automation and no-code can save a lot of time and money. As more users can simply construct their own models with no-code development, the need for data scientists will diminish.

• Helpful for data scientists – With a no-code solution, everyone works on the tasks suited to them; business users handle the simple tasks, eliminating the need to engage the data science team for them.

In a nutshell, no-code AI will benefit both non-technical workers and data scientists by gradually boosting their productivity, since they will no longer be required to perform repetitive tasks.

• Force.com
• Claris
• Mendix
• Microsoft PowerApps

Mr. Gautam Golwala, Mr. Chetan Pungalia

Alumni of the Year Novellus

A marketplace for second-hand fashion sounds novel and exciting. But taking it one step further and making it a consumer-oriented service can be tricky, and that was the idea behind Poshmark, a US-based venture started in 2011 by Manish Chandra, Tracy Sun, Gautam Golwala, and Chetan Pungalia.

Initially, Poshmark was used as an e-commerce platform to sell second-hand clothes. Today it is used to sell second-hand and new men's and women's fashion, and more. The company, surprisingly, does not hold any inventory, since goods are delivered directly from sellers to buyers. The platform does not charge for placing an advertisement on its website or app, but deducts a fee once the advertised item sells. Buying and selling on the website is as easy as it gets: the creators have kept the entire process easy and safe. There are various filters, sorting options, and descriptions as on other e-commerce websites, but what sets it apart is that it has kept a social networking aspect alive on the platform. For example, it hosts virtual parties called "Posh Parties" where people can attract more attention to their attire listings.

The global pandemic has boosted India's e-commerce economy indefinitely and paved a path for various companies to accelerate willingness among citizens to embrace online commerce. It is projected that India's social commerce market will double in the next ten years, which will pave the way for a business like Poshmark to bridge the gap between the vendor and the customer. Poshmark's diverse community has allowed new sellers the flexibility to run their own businesses on Poshmark using robust tools. Women make up the majority of the Poshmark community, and this unique brand has provided them with support. Apart from part-time sellers and professional sellers, the brand has helped various fashionistas launch their private labels. Poshmark's operations, enrollment experience, merchandising, pricing, and shipping have been managed through PoshPost, a shipping label unique to the United States Postal Service (USPS). Poshmark has already established its roots in the US market; India is a relatively newer market that poses its own challenges to operating smoothly. India has always been a value-apprehensive market where affordability is the major force for thriving. India's perception of pre-loved styles has been changing recently, and buyers are willing to embrace this new way of shopping. Poshmark has invested in educating community members on how to grow their businesses using Poshmark as a platform. India has always been on Poshmark's radar for expansion, as two of its co-founders, Chetan Pungaliya and Gautam Golwala, started their journey from the heart of Pune city in Maharashtra, India. Given the large amount of competition from other companies, distinguishing themselves was also a critical part of success in the Indian market. To build a loyal, diverse community of buyers and sellers, they need to earn the people's trust.

Mr Gautam Golwala, Co-Founder & CTO, Poshmark
Mr Chetan Pungalia, Co-Founder & SVP Engineering, Poshmark

Four elements separate Poshmark from the rest of the industry:

Poshmark is not just an e-commerce site but a social marketplace. Unlike other brands, social interactions among diverse vendors are encouraged, and potential clients may put inquiries directly to the merchants. Poshmark's seller community promotes a highly dynamic and customized purchasing experience, blurring the distinction between shopping in-store and purchasing on an e-commerce website.

Buyer protection and an authentication service are enabled for customers. Poshmark uses the AWS cloud: when a customer uploads a picture of an item to sell, Poshmark creates multiple versions of the image and uploads them to the Amazon Simple Storage Service, and Amazon CloudFront then delivers the images to customers. Poshmark has also benefited from the ever-blooming social media platforms; influencers play a massive role in the upliftment of the Poshmark community, and many individual sellers have promoted the brand to a whole new level.

It brings us great pleasure to acknowledge that two of the company's founding members, Mr. Gautam Golwala and Mr. Chetan Pungalia, are alumni of our college.

Mr. Gautam Golwala graduated from the Pune Institute of Computer Technology in 1996 and then went on to take a Master's degree in Computer Science from the University of Pennsylvania. Since then, he has worked for companies such as Yodlee, where he was an early employee and built a robust personal finance data platform using user data. He was also one of the key figures at Kaboodle, an online shopping website, where he built information extraction and machine learning systems.

Mr. Chetan Pungaliya graduated from the Pune Institute of Computer Technology in 1992. Before Poshmark, he was the Co-Founder and CTO at Inhale Digital and the Co-Founder and VP of Engineering at Kaboodle. He is a serial entrepreneur with early success in the social shopping space, and has used his engineering abilities to build and operate highly scalable consumer sites several times.

Both have done outstanding work since graduating from PICT and have achieved tremendous success as entrepreneurs in their fields. We hope their story inspires and revitalizes our readers' entrepreneurial spirit. We are fascinated by their extraordinary accomplishments and wish them the best in their future endeavours.

- The Editorial Board

Type Ia supernova

When a stable star explodes...

White dwarfs are considered among the most stable of stars. They have used up most of their nuclear fuel and shrunk to about the size of the Earth, while retaining a mass comparable to the Sun's. Left alone, a white dwarf can live for billions, if not trillions, of years. A white dwarf with a close companion star, on the other hand, can become a cosmic powder keg: if the orbit of the partner star takes it too near, the white dwarf pulls in so much material that it becomes unstable and explodes. Astronomers refer to this stellar explosion as a Type Ia supernova.

Though astronomers accept that such encounters between white dwarfs and ordinary companion stars are one likely source of Type Ia supernova explosions, the details of the process have long been poorly understood. One way of investigating the explosion mechanism is to look at the elements the supernova leaves behind in its debris, or ejecta. On 19th October 2021, NASA telescopes captured the colorful blast of a stellar explosion that occurred thousands of years ago, giving us a new view of such cosmic remains.

The accompanying images show G344.7-0.1, a supernova remnant created by a Type Ia supernova, as seen through NASA's Chandra X-ray Observatory, combined with infrared data from NASA's Spitzer Space Telescope and radio data from the National Science Foundation's Very Large Array and the Commonwealth Scientific and Industrial Research Organisation's Australia Telescope Compact Array. Chandra has proven to be one of the best tools available for studying supernova remnants and measuring the composition and distribution of heavy elements (i.e., elements heavier than hydrogen and helium).

Astronomers estimate that the supernova remnant G344.7-0.1, located roughly 19,600 light-years from Earth, is about 3,000 to 6,000 years old in Earth's time frame, making it extremely useful to study. That makes it significantly older than most other well-known and widely observed Type Ia remnants, including Kepler, Tycho, and SN 1006, which are known to have exploded within the last 1,000 years or so as observed from Earth.

Philomath

Quasars are distant objects in the universe that scientists have used to calculate the rate of expansion of the universe. An intervening galaxy can gravitationally lens the light of a distant quasar, creating multiple images of the same object. Time delays between brightness variations in these images reflect how fast the universe is expanding.

The expanding blast wave and the stellar debris produce X-rays in the remnant. The forward movement of the debris is hindered by resistance from the surrounding gas, slowing it down and creating a reverse shock wave that travels back towards the center of the explosion. This reverse shock wave heats the debris to millions of degrees, causing it to glow in X-rays. Type Ia remnants like Kepler, Tycho, and SN 1006 are too young for the reverse shock wave to have had time to travel backward and heat all the debris in the remnant's center. However, the relatively greater age of G344.7-0.1 means that the reverse shock wave has had enough time to travel through the debris all the way to the remnant's center.

The Chandra data suggest that the reverse shock has heated the highest-density iron more than the elements in the arc-like structures, implying that the highest-density iron is located near the true center of the stellar explosion. These results support the idea that heavier elements are produced in the interior of an exploding white dwarf.

The G344.7-0.1 image from the Chandra X-ray Observatory also shows that the densest iron is located to the right of the supernova remnant's geometric center. This asymmetry is caused by the higher density of the surrounding gas on the right than on the left.

This study gives a broader perspective on where and when these key elements are formed in stellar explosions. Astronomers certainly believe there is still much to learn about these fascinating objects, and Chandra will continue to be a crucial tool in investigating them. There is indeed a vast ocean of knowledge to quench our thirst for knowing more than humanity has yet known about outer space.


Interplay of disorder and fluctuations in physical systems

This year's Nobel Prize in Physics focuses on the complexity of physical systems, from the largest scales experienced by humans, such as Earth's climate, down to the microscopic structure and dynamics of mysterious yet commonplace materials, such as glass. The laureates show that without a proper accounting of disorder, noise, and variability, determinism is just an illusion. Indeed, the work recognized here reflects in part the comment ascribed to Richard Feynman (Nobel Laureate 1965), that he "believed in the primacy of doubt, not as a blemish on our ability to know, but as the essence of knowing."

To understand complexity, let's look at a simple party example: imagine Raj, Ram, Rohit, and other guests irregularly changing conversational groups and partners, hoping to find the best group of people to chat with, yet potentially never finding it. That is the sub-optimal state complex systems can get stuck in.

Assume we have a chamber filled only with gas molecules. If we drop the temperature and increase the pressure, the gas condenses into a liquid and then freezes into a periodic, well-ordered structure; but if we freeze it very rapidly, it becomes a different kind of body, called amorphous, with attributes of both solid and liquid. Glass is a good example of an amorphous material: it fractures like a solid and flows like a liquid.

Fig. Energy landscape

We can also compare the energy landscape of glass to that of a normal solid. The energy landscape of a solid is simple: a particle in a bowl can find only one place to rest. The energy landscape of glass, by contrast, is so complex and rugged that it is not clear where the particle will reside. It is frustrated!

Fig. Frustration in spin glass

Spin glass is the best example for the study of frustration. The term "spin glass" was coined in the early 1970s to describe disordered magnetic systems. It is an alloy in which iron atoms are randomly mixed into a grid of copper atoms. Even though there are few iron atoms, they change the material's magnetic properties in a puzzling manner. Iron is ferromagnetic: each iron atom behaves like a small magnet, or spin, which is affected by the iron atoms close to it. Imagine a triangle with a magnet placed at each corner, where each magnet can point its north pole up or down, but no two neighbouring corners may have the same orientation. When two of the magnets satisfy the constraint, the third cannot satisfy both of its neighbours, and no amount of flipping orientations will fix this: the system is "frustrated". This is the simplest, most idealised setting for understanding how spin glasses and other systems become frustrated. P. Anderson (Nobel Laureate 1977) argued that "The history of spin glass may be the best example I know of the dictum that a real scientific mystery is worth pursuing to the ends of the Earth for its own sake, independently of any obvious practical importance or intellectual glamour."
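The triangle example can be checked by brute force. The short sketch below (purely illustrative) enumerates all eight spin configurations on a triangle with antiferromagnetic bonds and confirms that at most two of the three bonds can ever be satisfied:

```python
from itertools import product

# Three Ising spins on a triangle with antiferromagnetic bonds:
# a bond is "satisfied" when its two spins point in opposite directions.
bonds = [(0, 1), (1, 2), (0, 2)]

satisfied_counts = {}
for spins in product([-1, 1], repeat=3):
    satisfied = sum(spins[i] != spins[j] for i, j in bonds)
    satisfied_counts[spins] = satisfied

max_satisfied = max(satisfied_counts.values())
print(max_satisfied, "of", len(bonds), "bonds satisfiable")  # 2 of 3
```

No configuration satisfies all three bonds, so many distinct configurations tie for the lowest energy; this degeneracy is exactly the frustration the text describes.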


Researchers from LMU and Saarland University have entangled two quantum memories over a 33-kilometer-long fiber optic connection - a record and an important step toward the quantum internet.

Spin glasses are a fascinating and interdisciplinary research topic that over the last forty years has inspired a vast scientific literature in theoretical and experimental physics, mathematics, computer science, finance, and beyond. Edwards and Anderson proposed an archetypal model, the so-called Edwards-Anderson (EA) model, that has inspired all the theoretical spin glass models developed since. They considered a spin glass version of the Ising model, constituted by a set of N >> 1 Ising spins σ = (σ1, σ2, ..., σN) ∈ {−1, 1}^N, with Hamiltonian

H(σ) = −∑_{j<k} J_jk σ_j σ_k,

where the J_jk are uncorrelated Gaussian random variables with zero mean and variance ⟨J_jk²⟩ = J². Frustration emerges by allowing both ferromagnetic and antiferromagnetic couplings, and hence we expect a "corrugated" energy landscape with many long-lived metastable states. It is generally believed that such a model reproduces the main features of a real spin glass.

To attack this complex problem, Edwards and Anderson introduced the "replica trick", a mathematical technique in which many copies, or replicas, of the system are processed at the same time. Giorgio Parisi (Nobel Laureate 2021) solved the EA model using the replica method, observing that while a ferromagnet has only two pure states (up and down) in its ordered phase, a spin glass must have infinitely many states in its ordered phase. Not only did this provide the solution, but it also had a stunning array of extensions to a wide range of spin-glass and other systems. In solving these complex problems, Parisi introduced a new order parameter, the overlap between replicas α and β:

Qαβ = (1/N) ∑_i ⟨σi⟩α ⟨σi⟩β.
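The overlap is easy to compute numerically. In the sketch below the "thermal averages" ⟨σi⟩ are made-up random numbers, not the output of a real spin glass simulation; the point is only to show the formula at work and the distinction between a replica's overlap with itself and with an unrelated replica:

```python
import numpy as np

def overlap(m_alpha, m_beta):
    # Parisi overlap: Q_ab = (1/N) * sum_i <s_i>_a <s_i>_b
    return np.dot(m_alpha, m_beta) / m_alpha.size

N = 1000
rng = np.random.default_rng(42)
# Toy per-site thermal averages <s_i> in [-1, 1] for two replicas.
m_a = rng.uniform(-1, 1, size=N)
m_b = rng.uniform(-1, 1, size=N)

print(f"self-overlap  Q_aa = {overlap(m_a, m_a):+.3f}")  # clearly positive
print(f"cross-overlap Q_ab = {overlap(m_a, m_b):+.3f}")  # near zero
```

A replica always has a sizeable overlap with itself, while two unrelated states overlap near zero; in Parisi's solution it is the full probability distribution of Qαβ over pairs of states that encodes the spin glass's many-valley structure.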

Giorgio Parisi's fundamental discoveries about the structure of spin glasses were so deep that they influenced not only physics but also mathematics, biology, neuroscience, machine learning, and artificial intelligence, because all these fields include problems directly related to complexity and frustration. Complexity concepts like frustration can even be used to attack biological problems such as the protein folding problem: the quenched Edwards-Anderson model explains qualitatively some properties of the rugged energy landscape of proteins and their dynamics. Protein folding and structure prediction were later revolutionised by machine learning; Google's AlphaFold achieves unprecedented accuracy in predicting protein structure from a sequence


NASA’s James Webb Space Telescope has recently revealed emerging stellar nurseries and individual stars in the Carina Nebula that were previously obscured due to limited capabilities.

of amino acids.

Parisi and his collaborators have shown that in the neural network model, and its many offspring, the multiple memories stored in the network correspond to the multiple equilibria of the spin glass. Moreover, their methods allowed them to address the classical optimization problem of the travelling salesman, whose search gets stuck at many local minima even though the global minimum is the target of interest.
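The connection between rugged landscapes and optimization can be illustrated with simulated annealing, a physics-inspired heuristic (not Parisi's replica method) that escapes local minima by occasionally accepting worse solutions at high "temperature". The instance below is a toy one we invented: cities on a circle, so the optimal tour is known.

```python
import math
import random

# Toy travelling-salesman instance: cities on a circle, so the optimal
# tour visits them in angular order around the circle.
random.seed(0)
n = 12
cities = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % n]])
               for i in range(n))

order = list(range(n))
random.shuffle(order)
T = 1.0
for step in range(20000):
    i, j = sorted(random.sample(range(n), 2))
    new = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt reversal
    delta = tour_length(new) - tour_length(order)
    # Accept worse tours with probability exp(-delta/T): the thermal
    # "kick" that lets the search climb out of local minima.
    if delta < 0 or random.random() < math.exp(-delta / T):
        order = new
    T *= 0.9995  # slow cooling schedule

optimal = n * math.dist(cities[0], cities[1])
print(f"found {tour_length(order):.3f}, optimal {optimal:.3f}")
```

As the temperature anneals to zero, the walker settles into a deep minimum rather than the first valley it happened to fall into, which is precisely the spin-glass picture of optimization described above.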

Parisi also introduced the infinite-dimensional spin glass model, from which a new type of physics emerged. By studying complexity, we can predict results more accurately: the study of complexity has allowed us to derive unprecedented conclusions about systems that, on the surface, look random, unpredictable, and impossible to model theoretically.

After more than two centuries, many problems related to complexity remain unsolved. We are still unable to predict the weather accurately, we face open problems in AI and ML, and we do not yet know how to solve the three-dimensional spin glass model. There is a chance that you could win a Nobel Prize by solving such mysteries in the coming years. One day, surely, we will solve these complex paradoxes and bring a big revolution to machine learning and various other domains.

- Dhaval Chande
Pune Institute of Computer Technology

Neuromorphic computing

Pansophy

Building the silicon brain

The dream of developing a human-like robot has been universal among humans. We want our computers to come up with something new or out of the box without us having to program anything specific for it.

Some researchers have been trying to turn this dream into reality, and neuromorphic computing is a massive step in this attempt. Splitting the word neuromorphic yields two parts: neuro, relating to neurons or the nervous system, and morphic, meaning having a similar form or structure. In simple terms, neuromorphic computing is a computer architecture that closely resembles the human brain. It differs from the von Neumann architecture used by most computers today.

The question here is why the architecture should be modeled after the human brain. There are two key factors: energy consumption, and the latencies and bottlenecks due to the structure itself.

It has been tested and proved that the brain requires less energy for operation. It is incredibly energy efficient compared to traditional computers. If data storage and communication continue to increase at the current rate, the total energy consumed by binary operations using CMOS will surpass ~10^27 joules in 2040, which exceeds the total energy produced globally today. There is a need to move away from energy-hungry computer architectures for the future of sustainable computation.

The other inherent drawback of the Von Neumann architecture is that the memory and the ALU are separate components. Hence, during any processing or computation, the control unit has to fetch data from and store data in the memory. The memory bus used for this has limited bandwidth, which puts a fundamental limit on computational power and is energy demanding. It would be better to co-locate the memory and the ALU. This type of structure exists in our brains: there is no separate memory storage location or any dedicated arithmetic logic unit, and no compiler or neuron-level code is needed. Each neuron acts as a self-sufficient memory device or joins a large combination of neurons to serve some specific purpose.


In a study conducted by Mindshare, it was found that the area in our brain dedicated to encoding memory generates three times more activity from AR content than from non-AR content.

To understand the ongoing research in this field, we need to look briefly at how our brain works to learn something new, think, or solve a problem, which is quite fascinating. Our knowledge regarding the functioning of the brain is limited, and numerous ongoing research projects aim to gain insights into it.

When a neuron carries an electric signal within itself, on reaching its tail end, there may be thousands of synapses that trigger the release of neurotransmitters. Neurotransmitters then pass this nerve impulse on to neighboring neurons by attaching to their receptor sites.

In terms of digital electronics, this is setting a particular state for multiple neurons with a selective and specific release of neurotransmitters and ions. The neuron behaves analogously to a transistor by controlling the flow of charges and setting states. These states decide how the neuron will interact with other neurons the next time a signal passes through it. On deeper comparison, scientists concluded that this action is similar to a resistor retaining some memory of its past interaction (hysteresis), which gave birth to the component called the memristor, a term coined in 1971 by Leon Chua. However, we have only succeeded in creating memristors as active components (requiring external power), not as true memristors (requiring no external power). This is still an active field of research; a controversial 2008 research paper claimed that Hewlett-Packard had found the required memristor.

Recently, Nature Nanotechnology published an issue on non-von Neumann computing covering neuromorphic computing. Intel Labs has also developed its second-generation chip, Loihi 2, along with the Lava software framework, with massive improvements over current chips. Different scientists hold different opinions about the usability of memristors and whether they would solve our targeted


problems. Let us understand how these memristors work, or are supposed to work. A memristor contains a material sample; when a current flows through it, an electric field is induced, which causes doped impurities in the sample to move or diffuse by the required amount.


The above image illustrates the working of the memristor in comparison with the working of a neuron. As you can see, when an electric impulse passes, impurities diffuse to other locations (State 1 versus State 2), thereby changing the memristor's resistance. This state of resistance is preserved until the device receives another electric impulse, as is the case with the synapse of a neuron. This act of preserving the state helps us memorize and identify things. Every time we look at a new object, neurons fire up and create a specific condition according to the received impulses. As we learn more about that object, the diffused pattern becomes better. Our neurons take a snapshot of the ion concentration during that impulse, and we remember that object. What is even more intriguing is that when we practice an activity regularly, more neurons get arranged to store the states of that activity, making it a long-term memory, and we get better and better at it. This process is analogous to adjusting weights according to newly learned features (feature extraction) in machine learning.
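The state-preserving behaviour described above can be sketched with the linear-drift memristor model associated with the 2008 Hewlett-Packard work mentioned earlier. The parameter values below are assumptions chosen for illustration, not measured device data; the point is only that the resistance depends on the history of the current that has already flowed.

```python
import math

# Illustrative parameters for a linear-drift memristor model (assumed
# values, not measured device data).
R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped vs. undoped resistance
D = 10e-9                        # device thickness (m)
MU = 1e-14                       # dopant mobility (m^2 s^-1 V^-1)

def step(w, v, dt):
    """Advance the doped-region width w under applied voltage v for one
    time step; return the new (clamped) state and the current resistance."""
    m = R_ON * (w / D) + R_OFF * (1 - w / D)   # resistance depends on state
    i = v / m
    w = min(max(w + MU * (R_ON / D) * i * dt, 0.0), D)  # dopants drift with current
    return w, m

# Drive the device with one sine cycle of voltage: at the same |v| on the
# way up and on the way down, the resistance differs, because the internal
# state "remembers" the current that has already flowed (hysteresis).
w, dt, resist = 0.1 * D, 1e-4, []
for k in range(10_000):
    v = math.sin(2 * math.pi * k / 10_000)
    w, m = step(w, v, dt)
    resist.append(m)
print(resist[2500] != resist[7500])  # same |v|, different resistance
```

This pinched-hysteresis behaviour is what lets a memristor act as a stored weight: the resistance left behind after a pulse is the memory.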

Whenever we try to remember something, we search through these stored patterns and, depending upon how the state was stored, we remember it. Similarly, memristors store this state in the form of a resistance value, which can be used to adjust parameters while learning. It is also a solid-state device, as compared to fluid-based neuronal transmission. This state-retention (hysteresis) property sets memristors apart from regular resistors, which follow Ohm's law. A memristor acts as an artificial synapse and is a better, more efficient form of hardware for neural networks such as recurrent neural networks. In conclusion, neuromorphic computing and memristors are still very much under research, but conceptually they have been shown to be much better than the current Von Neumann architecture, and a better hardware architecture for artificial intelligence software; together, they might one day break the barrier between the unconscious and the conscious (as we know it).

-Sagar Abhyankar
Pune Institute of Computer Technology
Intel's Pohoiki Beach computer features 8.3 million neurons. It delivers 1,000 times better performance and 10,000 times more efficiency than comparable GPUs.

The new modus operandi

Robotic Process Automation Philomath

Automation gives machines the capability to carry out repetitive processes on their own. Such automation is possible with the help of dedicated software technologies. The IT sector produces volumes of data for back-office tasks, and such tasks can be automated to reduce human error and increase efficiency. Process automation, integration automation, and artificial intelligence automation are potential areas to explore.

In recent times, the emerging automation technology of Industry 4.0 is Robotic Process Automation (RPA). RPA is capable of automating the business processes of various domains such as education, healthcare, retail, telecommunications, and banking. It can easily automate manual data entry operations like keeping track of invoice details and maintaining records. RPA automates the repetitive tasks performed by humans, resulting in increased efficiency and productivity for the firm, and its rapid adoption in different industrial areas could pave the way for the future of automation. RPA, as the name implies, is a software technology for automating repetitive processes using robots (bots), and it can automate bulky, tedious, and time-consuming processes without any human intervention.

Many processes and tasks are repetitive in nature and do not require cognitive human skills for decision-making. Routine tasks with deterministic outputs are mostly back-office tasks with no consumer involvement; such tasks are the potential application areas of RPA. Currently, most of the industry deals with data through spreadsheets, Enterprise Resource Planning (ERP), and Customer Relationship Management (CRM) software to automate reading, writing, and extracting data from files. With the technological shift from these legacy systems to RPA, humans are free to focus on more value-added, decision-making tasks.
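As a toy illustration of the kind of back-office task described above, the sketch below "reads" invoice rows, validates them, totals amounts per customer, and routes malformed rows to a human. The file layout and field names are invented for the example; a real RPA product (UiPath, Automation Anywhere, Blue Prism, and the like) would drive the actual ERP/CRM screens or APIs instead.

```python
import csv
import io

# Hypothetical invoice export with columns invoice_id, customer, amount.
SAMPLE = """invoice_id,customer,amount
INV-001,Acme,1200.50
INV-002,Globex,75.00
INV-003,Acme,310.25
INV-004,Globex,not-a-number
"""

def process(stream):
    """Validate each invoice row and total amounts per customer;
    rows the bot cannot handle are flagged for human review."""
    totals, errors = {}, []
    for row in csv.DictReader(stream):
        try:
            amount = float(row["amount"])
        except ValueError:
            errors.append(row["invoice_id"])   # route to a human
            continue
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + amount
    return totals, errors

totals, errors = process(io.StringIO(SAMPLE))
print(totals)   # {'Acme': 1510.75, 'Globex': 75.0}
print(errors)   # ['INV-004']
```

The deterministic rows are handled without intervention, while the one ambiguous row is escalated: exactly the division of labor between bots and humans that the article describes.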

Moreover, RPA is not limited to performing repetitive tasks only. Using machine learning and artificial intelligence, the bots can be trained to perform more complex and cognitive jobs with lower error rates.

RPA has been proven to increase the efficiency and productivity of firms. According to Deloitte, 53% of companies have switched to RPA, resulting in a 90% increase in quality, an 86% increase in productivity, and a 59% cost reduction over the company's traditional systems. According to Capgemini, the advantages of RPA to businesses include improved delivery, employee happiness, and productivity increases, all of which lead to a faster return on investment (ROI). The untapped sectors that could leverage the power of RPA include healthcare, finance, IT services, data extraction, and many more. From automating workflows and reducing labor-intensive tasks to augmenting them with AI, RPA can prove to be quite a promising player.

RPA statistics reveal that over the last two years, from 2019 to 2021, RPA has cleared the road for automation to a large extent. The Asia Pacific (APAC) region is expected to be the fastest-growing market for RPA in the future. RPA will continue its growth if utilized to its full extent. This trend proves that RPA is more than just a buzzword, as more and more firms are opting for automation through RPA.


Ethereum 2.0

The descent of Bitcoin’s dominance?

Philomath

As the popularity of cryptocurrency grows and more people fully comprehend how it functions, its two leading players, Bitcoin and Ethereum, are being heavily criticized for the carbon footprint of their ‘mining activities’ that keep the blockchain operating. With Tesla CEO Elon Musk expressing concerns about the massive usage of fossil fuels for Bitcoin mining and transactions, the automaker has stopped taking Bitcoin payments, and many other businesses are switching to greener cryptocurrencies. According to the Digiconomist’s Ethereum Energy Consumption Index, Ethereum’s annual energy consumption is comparable to that of the Netherlands.

Because of the high demand, transaction fees have risen, making Ethereum costlier for the typical user. The amount of disk space required to run an Ethereum client is constantly rising, and the proof-of-work consensus mechanism, which is responsible for Ethereum's security and decentralization, has a significant environmental impact. To address these problems, Ethereum outlined a series of enhancements that would transform it into a much more scalable, secure, and, most importantly, long-lasting blockchain. While this is going on, Ethereum's

core value of decentralization would remain intact. In early 2022, in an event termed "the Merge," Ethereum will release its most significant upgrade yet, promising to reduce the network's energy consumption by more than 99 percent and positioning the blockchain, which launched in 2015, as the big "green" choice for crypto users and developers.

The transformation of Ethereum into Ethereum 2.0 involves three phases, namely the Beacon Chain (Phase 0), the Merge (Phase 1), and the Shard Chains (Phase 2).

The Beacon Chain

Ethereum has traditionally used the proof-of-work consensus algorithm to validate transactions and create new blocks. To be qualified to publish a block of transactions, a miner must solve an arbitrary computational puzzle faster than every other miner, which demands a huge amount of disk space and processing capability. Solving this puzzle generates competition among miners, as well as costs in the form of energy consumption. A fraudulent miner would have to win the proof-of-work race continuously to successfully scam the blockchain, which is extremely unlikely and incredibly expensive.

92% of the money you earn, transact with, and use to buy goods and services exists only on computers and hard drives. Only an estimated 8% of currency globally is physical money.

Proof-of-work is a successful method of securing the network, although it is inefficient in terms of energy use. A sustainable future for Ethereum has already been developed in the form of the Beacon Chain, a proof-of-stake (PoS) chain that went live in December 2020. In PoS, validators replace miners and serve the same role, except that they deposit ETH as collateral against dishonest behavior instead of spending their resources up front in the form of computing activity. The staked ETH can slowly drain away if a validator is idle (offline when it is supposed to execute some validator job), while evidently dishonest activity results in the staked assets being "slashed." This strongly encourages active and honest participation in securing the network. As a result, arbitrary puzzle solving is eliminated, resulting in a significant reduction in energy expenditure.

Currently, a person must stake 32 ETH to become a full validator, or deposit ETH to join a staking pool. The enlarged network of shards and stakers is managed and coordinated by the Beacon Chain. However, it is not the same as the Ethereum Mainnet of today; it is incapable of working with accounts or smart contracts. By keeping the PoS chain separate from the main network, a ready-to-use solution can be developed without jeopardizing the Ethereum PoW chain's thriving decentralized application platform. The Beacon Chain is now online, ready for action, and protected by millions of ETH placed across over 240K validators, thanks to a one-way bridge from the PoW network to the PoS network that began receiving deposits in November 2020. The Beacon Chain has been a complete success since its introduction in December 2020, having completed 100% of its epochs without any downtime.
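The staking incentives described above can be summarized in a few lines of code. The 32 ETH deposit matches the figure in the text, but the idle-leak and slashing amounts below are invented for illustration; they are not Ethereum's actual protocol parameters.

```python
# A simplified sketch of proof-of-stake incentives: stake is collateral,
# idleness leaks it slowly, provable dishonesty burns it outright.
# Penalty sizes here are illustrative assumptions only.

class Validator:
    DEPOSIT = 32.0                    # ETH staked as collateral

    def __init__(self):
        self.stake = self.DEPOSIT

    def penalize_idle(self, epochs, leak_per_epoch=0.01):
        """Stake slowly drains while the validator is offline."""
        self.stake = max(0.0, self.stake - leak_per_epoch * epochs)

    def slash(self, fraction=0.5):
        """Provably dishonest behaviour burns a chunk of the stake."""
        self.stake *= (1 - fraction)

honest, lazy, cheater = Validator(), Validator(), Validator()
lazy.penalize_idle(epochs=100)
cheater.slash()
print(honest.stake, lazy.stake, cheater.stake)  # 32.0 31.0 16.0
```

The asymmetry is the point: doing nothing is mildly costly, cheating is ruinous, so rational validators stay online and honest.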

The Merge

While the Beacon Chain is an ideal method for altering the Ethereum consensus mechanism, the Ethereum network will not remain divided indefinitely. When these two systems officially join together, it is termed "The Merge." Sometime in 2022, Ethereum Mainnet will "merge" with the Beacon Chain, becoming its own shard that applies proof-of-stake rather than proof-of-work. Ethereum's history on the PoW network will be retained while the PoS consensus layer is integrated as a substitute for PoW to completely realize the transition to PoS. Stakers will be allocated to validate the Ethereum Mainnet once The Merge occurs. Miners will likely invest their revenues in staking in the new proof-of-stake system, as mining will no longer be required.

None of the Ethereum network's transactions will be lost in this transformation: "The Merge" will not affect the Ethereum network's data layer. To facilitate a smooth transition for all ETH holders and users, Mainnet will bring the ability to run smart contracts into the proof-of-stake system, as well as the whole history and present state of Ethereum. "The Merge" is not a new Ethereum version, but rather an exciting enhancement to Ethereum's consensus layer, bringing it closer to the original goal put forth at its inception.

The Merge will mark the end of Ethereum's proof-of-work era and the beginning of a more sustainable, environmentally friendly Ethereum. Some functions, such as withdrawing staked ETH, will not be available right after The Merge. A post-merge "cleanup" upgrade to handle these features is planned and should happen soon after The Merge is done.

The Shard Chains

The fundamental scalability issue that blockchains, including Ethereum, are now dealing with is that each node must verify and execute each transaction. In computer science, there are two basic scaling techniques: vertical scaling (essentially, making nodes stronger) and horizontal scaling (simply adding more nodes).

Blockchains must scale horizontally to be decentralized. Nodes running on consumer hardware are one of the goals of Ethereum 2.0. Sharding is the term for dividing a database horizontally.

Microsoft Bing is more focused on on-page optimization and incorporates social signals, while Google is more focused on E.A.T. and links.

A shard chain is typically processed by a subset of nodes. Validators, or virtual miners, are assigned to shards and are only responsible for processing and validating transactions in that shard (chain).

Validators must only store and run data for the shards they are validating, not the entire network (as happens today). This decreases the amount of hardware required and speeds up the process. The main issue in sharding a blockchain is the security of the shards: because validators are dispersed among shards, a single shard could be taken over by fraudulent validators. The use of random shuffling of validators, in which each shard block gets a (pseudo-)randomly chosen committee of validators, ensures that an attacker with less than 1/3 of all validators cannot attack a single shard.

As a result, Ethereum 2.0 is well on its way to solving the scalability trilemma, which aims to make Ethereum secure, scalable, and sustainable.

The shard chains upgrade will split the network's load into sixty-four new chains. This will allow Ethereum to breathe easier by lowering congestion and increasing transaction speeds beyond the present limit of 15-45 transactions per second. And despite the fact that there will be more chains, validators, the network's maintainers, will have to do less work: they will only have to 'run' their own shard, not the entire Ethereum network. This makes Ethereum's nodes lighter, letting it scale while remaining decentralized.

One might think the Ethereum network would be more vulnerable to attack after switching to proof-of-stake. However, since the validators responsible for securing the network must stake a significant amount of ETH in the system, any attempt to attack the network would cause the system to permanently destroy their ETH.

Ethereum will soon be able to operate on a personal laptop or phone thanks to sharding. As a result, in a sharded Ethereum, more users should be able to participate or run clients. This will improve security, since the more decentralized the network, the smaller the attack surface. Sharding will also make it easier to deploy clients on your own, without depending on any intermediary services, due to cheaper hardware needs. The Beacon Chain not only boosts trust in the proof-of-stake mechanism, but also enables estimations of Ethereum's post-merge energy usage. According to a recent calculation, upgrading to proof-of-stake may save 99.95 percent of total energy, as proof-of-stake is 2000 times more energy efficient than proof-of-work. For each node on the Ethereum network, the energy expenditure will be roughly equal to that of running a household computer.

Thus, we can conclude that Ethereum 2.0 promises to be a revolutionary fix in the crypto field, one that certainly needs to be explored and implemented to achieve climate goals for a better world, at a time when the world relies on technology to carry out its trade more than ever.


Future of Mars Exploration

Would you move to Mars?

This year has been pioneering in the domain of space exploration. NASA's Perseverance rover, accompanied by the Ingenuity helicopter, and CNSA's (China National Space Administration) Zhurong rover touched down successfully on the surface of Mars. They have been exploring the surface of Mars in line with their mission objectives. They are not the first rovers to explore Mars; several others have been sent in the past, of which one (NASA's Curiosity rover) is still operational. Mars has now been explored for almost 50 years.

The objectives of the Perseverance rover are to "seek signs of habitable conditions on Mars in the ancient past and search for evidence/biosignatures of past microbial life and water."

Moreover, NASA plans to return samples of the Martian soil by 2030 to study Martian geology in depth. Hence, the rover is a link to future Mars sample-return missions and manned missions to Mars. As of October 2021, Perseverance has concluded that Martian rocks have interacted with water multiple times, with some of them containing organic molecules. For the Mars sample return, Perseverance carries 43 cylindrical tubes to collect Martian soil samples from various locations; as of October, it has collected and sealed 6 of those 43 tubes. PIXL (Planetary Instrument for X-ray Lithochemistry) is an X-ray spectrometer situated at the end of the rover's robotic arm that maps the elemental composition of rocks. Another onboard instrument, SHERLOC (Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals), maps the distribution of organic molecules in rocks. This instrument gives scientists an idea of where to look for samples.

Why is Mars under such close observation? The first reason is to understand what turned Mars from a wet, life-sustaining planet into a dry and cold one: to understand its geology in depth, its biosignatures, and to trace paths and areas where life could have existed in the past. The second reason is for the benefit of humanity. Exploration has led to numerous discoveries, and we need to understand more about our Solar System. The Martian surface may have minerals


Mars has the largest canyon in our Solar System, Valles Marineris. It is 4 miles deep and stretches thousands of miles long.

that could have extraordinary properties ideal for future development. The idea of exploring worlds beyond our own is a giant leap.

What are the challenges of going to Mars?

There are innumerable challenges to achieving this feat. Radiation exposure is one of the most underrated. Toxic soil, low gravity, lack of water, massive dust storms, transportation, frequent resupply of resources, and communication delays are some of the others that make it difficult.

Relying on robotic exploration is our best bet for now, since it guarantees higher chances of success. Human Mars exploration could serve as a springboard for missions to other parts of the Solar System, and it would have a great impact on the general public. More private agencies would want to get in on this revolution. Applying our capabilities to the fullest would lead to massive advancements. Exploring Mars is the first step toward exploring moons like Europa and Titan, and it would prove that we can spend a complete 2-3 year mission exploring other planets.

For this to become a reality, hundreds of experiments are being conducted by several space agencies, not only on the ground but also on the International Space Station, which orbits our planet at an altitude of roughly 250 miles (400 km). Biological, technological, chemical, and physical research is conducted extensively. Commercial agencies have joined and partnered with government agencies to accelerate progress. Human Moon missions to create the lunar Gateway will occur in this decade, acting as a testbed for human Mars missions. There is a lot to accomplish, and numerous robotic missions will occur before humans are sent there for the first time.

The gradual transition of humans from low Earth orbit to the Moon and then eventually to Mars makes this decade interesting. The idea of humans exploring an entirely different world soon is daunting. Hundreds of years and generations later, the humans dwelling on Mars will look at the sky, point at the pale blue dot, and remember where humanity originated.


A solution against Counterfeiting

The Self-erasing Memory Chip Philomath

If you have seen the battery of your phone expand, creating a bulge in its case, have you ever wondered whether the problem could be due to a counterfeit battery? A counterfeit electronic component is a part created by altering a legitimate component, and the problem is not limited to phone batteries. Imagine the harm that could be caused by tampering with a device to make it send information to a third party, or by secretly installing a listening device. The problem has become so extreme that an estimated 15% of Pentagon components are counterfeits.

Recently, a group of researchers from the University of Michigan developed a chip that could help stop counterfeit electronics or provide alerts if sensitive shipments have been tampered with. Tampering could be detected by writing an authentication barcode or a visible secret message on the chip and placing it inside the device. The chip is made using a new, almost magical material that makes the message visible but causes it to vanish as soon as it comes into contact with UV light, i.e., sunlight. If the device has been tampered with on its journey, by opening the system's case and exposing it to light, the owner will know by checking the message printed on the chip, reading it with the right kind of light.

The new material used for creating the chip is a 'beyond graphene' semiconductor that works slightly differently than graphene: it emits light when its molecules vibrate at specific frequencies. Lying on top of a thin film of azobenzene molecules, the semiconductor stores energy temporarily and changes the color of the emitted light, effectively allowing visible messages to be written on the chip.

A sandwich of a single layer of tungsten atoms between two layers of selenium atoms is created and transferred onto the azobenzene-coated chip. When exposed to UV light, the azobenzene molecules shrink and stretch the tungsten diselenide atoms above them, causing the semiconductor to emit light at slightly longer photoluminescence wavelengths. The azobenzene naturally gives up its stored energy in a few days, making the chip self-erasing and ideal for tamper-detection systems. This period can be changed with heat and light exposure; alternatively, the message can be erased in an instant with a flash of blue light, making the chip ready for a new message.

Installing these chips inside products or security systems can reduce some of the risk of component counterfeiting or tampering. At present, the only stumbling block is that the stored energy lasts only about seven days in the dark. With a longer lifespan, these barcodes could be written into devices as authorization keys. The researchers are working in this direction, and these chips could prove to be a great route to assurance of authenticity.

-Sangeeta Singh
Pune Institute of Computer Technology

Averting hackers using sticky-tape

It's quark-y Philomath

Technology has evolved immensely in recent years. It is almost impossible to imagine continuing one's day-to-day life without it. With rapid evolution on the rise, various new and even more advanced technologies are emerging. Quantum computing is one such advanced new technology, based on the principles of quantum physics. We live in a world where powerful computers are available everywhere, even supercomputers that can perform operations in split seconds. Despite this, there are a few problems that even these state-of-the-art supercomputers struggle to solve. This is where quantum computers come into the picture.

Peter Leek, a lecturer and quantum computing expert at Oxford University, says: "'Classical' computing has been an incredible 20th-century achievement, but the way we process information in computers now still doesn't take full advantage of the laws of physics as we know them. Work on quantum physics, however, has given us a new and more powerful way of processing information."

Quantum computing, even though it is a relatively new technology, has already found applications in various domains. A new generation of electric vehicles using quantum battery technology has been developed by IBM and Mercedes in collaboration. Reducing atmospheric carbon emissions using quantum-computing-aided material discovery is another application. Quantum computing also finds application in various research facilities, for AI and ML computation, et cetera.

Cybersecurity is one domain where quantum computing is already in use, most popularly under the guise of quantum communication. Extensive work is taking place in this domain; its most enthralling application is protecting information channels against eavesdropping using quantum cryptography.

With the rise in the application of technology in our day-to-day lives, data security and data privacy have become very important. Data encryption and decryption are at the core of data transfers. Hackers tend to use brute-force methods to decrypt sensitive information, which can cause severe


damage and loss, depending on the sensitivity of the information. Quantum computers have also found applications in making the encryption process more robust and secure, preventing hackers from decrypting sensitive data.

However, the main drawback is that quantum computers are unaffordable. The components required to build such a device, and its maintenance, make it a very costly affair. The sheer cost keeps mass users away from this technology, which is why we do not yet find mainstream use of quantum computers and quantum communication. Recently, researchers from the University of Technology Sydney (UTS) and TMOS, an Australian Research Council Centre of Excellence, took the fight against online hackers a giant leap forward, towards realizing affordable, accessible quantum communications, a technology that would effectively prevent the decryption of online activity. Everything from private social media messaging to banking could become more secure due to new technology created with a humble piece of adhesive tape, paving the way for future everyday use. The device used to facilitate this is said to comprise quantum emitters at its core.

Earlier, these emitters were created using a complex method and expensive facilities. Now, the same emitters are created using $20 worth of white graphene pressed onto a piece of adhesive tape. The new process exfoliates the material with the tape, effectively peeling off its top layer into thin flakes. Multiple such layers can be assembled into a 3D structure, which can then act as a substitute for the expensive systems used earlier.

This new research is intriguing, and it could pave the way for quantum communication to be employed in everyday activities by the general public while simultaneously ensuring the security of our data.

-Shrihari Jhawar
Pune Institute of Computer Technology

Quantum computing is projected to reach $780 million by 2025.

Chairperson: Onkar Litake

Vice Chairperson: Atharva Naphade

Treasurer: Rohan Pawar

Vice Treasurer: Nandini Patil

Secretary: Aniket Kulkarni Durvesh Malpure Harmandeep Singh

Joint Secretary: Karan Lakhwani Megha Dandapat Tanvi Mane

Secretary of Finance: Saket Gupta

Joint Secretary of Finance: Sejal Jadhav

Public Relations Officer: Manav Majithia

Design Head: Saima Ansari Rohit James Sufiya Sayyed

Technical Head: Neil Deshpande Anupam Patil Suyash More Siddharth Bora Sanket Kulkarni

P.I.N.G. Head: Anushka Mali Paresh Shrikhande Rutuja Kawade

P.I.N.G. Team: Anupam Patil Neil Deshpande Dhananjay Singh Himanshu Shendge Samir Hendre Sangeeta Singh Shreehari Jhawar Shreyas Chandolkar Soham Ravindran

Webmaster: Pranav Mohril Suyash Joshi Abhishek Dhar

Marketing Head: Yash Kale

Digital Marketing: Pranav Rathi

PISB Office Bearers 2021-2022

The PICT IEEE Student Branch (PISB) is proud to announce its annual technical festival, Credenz. Credenz offers a forum for individuals to showcase their technological insights and imaginative power to excited minds. The goal of Credenz is to offer a diverse variety of technical and non-technical events, workshops, and seminars to technophiles. P.I.N.G. has garnered praise from connoisseurs and experts from different fields since its inception in 2004, thus remaining one of the most highly celebrated events worldwide.
www.credenz.in
www.facebook.com/pisb.credenz
www.instagram.com/pisbcredenz