IT’S RAINING ROCKETS!!

If left unattended, the piling up of space debris could threaten our future in space and much more.

SYNERGIA FOUNDATION

RESEARCH TEAM

Space has been hailed as the ‘New Frontier’ that will give a fresh lease to human endeavours as planet Earth lies abused and wasted. But mankind’s insatiable greed, and the mad race to be the first exploiter in any new domain, is fast threatening to condemn space to the same fate as our planet. Having trashed the Earth for centuries, mankind is now doing the same to space.

Since the middle of the 20th century, objects have been lobbed into orbit with ever-increasing frequency. From that fateful day in 1949 when the ‘Bumper WAC’ became the first human-made object shot from Earth’s surface to an altitude of nearly 250 miles, riding atop a captured German V-2 rocket, through the Soviet Sputnik (1957), to the launch of Artemis I, propelled by the most powerful rocket designed to date, which splashed down in the Pacific in December 2022, the human race has scarcely taken a break in trying to conquer space.

Unsurprisingly, the space that surrounds Earth is becoming increasingly cluttered with our debris. In addition to the multitudes of shards too small to be seen, thousands of pieces of space debris are currently being monitored and catalogued, prompting concerns about the sustainability and security of space travel. A few retired satellites can even be seen in clear skies with the unaided eye, though they are only a tiny portion of the junk clogging up Earth’s orbit. Satellites that have completed their useful lives threaten to outnumber the functional platforms. In addition, there are other fragments like micro debris (paint flakes, detritus from burnt-out rocket motors, etc.), amounting to nearly 25,000 catalogued pieces and counting. If the minuscule pieces are added, the number could run into millions! The danger posed to space utilisation, interplanetary travel and Earth’s inhabitants themselves can well be imagined.

THE SPACE RACE

The rivalry between the U.S. and the USSR intruded into space in the 1950s, when one Soviet success after another set alarm bells ringing in the U.S. Prophets of doom proclaimed that once the Soviets had seized mastery of space, they would lose no time in exploiting it to lob atomic bombs onto America from orbit. The fear reached paranoia levels in 1957 when Sputnik I went into orbit, beating the U.S. once again.

The U.S. military initially had no system to monitor what was happening in space. In 1961, the ambitious Project Space Track was declared operational by the U.S. Air Force, using radar, optical instruments, radio, and visual sightings to keep track of space objects. Space Track worked in conjunction with the top-secret, military-run Space Detection and Tracking System (SPADATS) under the North American Air Defence Command (NORAD), which was entrusted with defending the U.S. from incoming Soviet nuclear-tipped ballistic missiles. Space Track regularly shared unclassified data with friendly countries.

The need for vigilance, along with academic interest in space objects and debris, continues to drive the watch on the skies even today. Project Space Track was equipped with its first computer (an IBM 680) as early as 1958, exponentially enhancing its capability to detect, track, record and analyse space-bound objects. When a Thor-Ablestar rocket upper stage burst on June 29, 1961, in the first satellite breakup on record, it produced over 200 catalogued fragments and nearly tripled the amount of data in this database!

A WORSENING MENACE

In 1978, NASA scientists Don Kessler and Burton Cour-Palais formulated the landmark Kessler Syndrome. The theory states that as the density of space debris increases, a cascading, self-sustaining runaway cycle of debris-generating collisions can arise that might ultimately make low-Earth orbit too hazardous to support most space activities. Their efforts, together with increased space activity, led to a greater focus on the debris problem.
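
To see why the cascade is self-sustaining, consider a deliberately crude simulation: the collision rate scales with the square of the object population, while atmospheric decay removes only a fixed fraction each year. Every constant below is hypothetical, chosen purely to make the runaway visible, not to model real orbits.

```python
# Toy Kessler-cascade model: quadratic collision term versus linear decay.
# All parameters are invented for illustration.

def simulate_debris(initial=1000, launches_per_year=80,
                    collision_coeff=1e-7, fragments_per_collision=300,
                    decay_rate=0.02, years=100):
    population = float(initial)
    history = []
    for year in range(years):
        collisions = collision_coeff * population ** 2   # chance pairings
        population += launches_per_year                  # new objects
        population += collisions * fragments_per_collision
        population -= decay_rate * population            # atmospheric decay
        history.append((year, population))
        if population > 1e12:  # the cascade has clearly run away
            break
    return history

for year, pop in simulate_debris():
    if year % 10 == 0 or pop > 1e12:
        print(f"year {year:3d}: ~{pop:,.0f} objects")
```

Because the collision term grows with the square of the population while decay grows only linearly, no equilibrium exists for these parameters: the population must eventually explode, which is precisely Kessler and Cour-Palais’s point.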

Anti-Satellite (ASAT) tests are among the most significant single events in terms of breakups, which historically have contributed the most to the population of fragmented space debris. Explosive breakups and ASAT tests may produce millions of deadly but untraceable particles. For instance, the Pegasus/HAPS breakup in 1996 left more than 750 trackable fragments, whereas the Ariane 1 breakup in 1986 produced around 500 trackable parts. Models predict tens to hundreds of non-trackable components for every trackable fragment, and the quantity of such debris rises as fragment size decreases.

Fortunately, only four countries have ever conducted ASAT tests: the U.S., the USSR, China and India. The worldwide protests after the 2019 Indian ASAT test were understandable, and hopefully they will discourage other aspirants. And although they promise significant technological advancement, projects such as SpaceX’s Starlink and Amazon’s Project Kuiper, which aims to launch a mega-constellation of around 3,200 satellites, pose a dangerous threat in space.

FINALLY, REALISATION DAWNS!

A new space race has recently begun worldwide: the race to find a solution for the ever-increasing amount of space junk. India has implemented measures to control the escalating amount of orbital junk, such as abandoned rocket stages and satellites, in low Earth orbit.

The methods include facilities for the surveillance and observation of space objects, as well as best practices, including the passivation of launch vehicle upper stages, conjunction evaluation, and satellite collision avoidance. The ISRO System for Safe and Sustainable Space Operations Management (IS4OM) has been put into operation to protect Indian space assets from environmental risks in space, to carry out associated R&D projects, and to help spread knowledge about the long-term sustainability of space activities.

Chinese aerospace scientists have developed a method to utilise a sizable “sail” to deorbit spacecraft at the end of their useful lives to combat the problem of space debris. Scientists on space missions have already tested the technology. The most recent instance was the launch of three satellites on June 23 by a Long March-2D carrier rocket in southwest China. Three days later, the rocket’s deorbiting sail opened. According to the Shanghai Academy of Spaceflight Technology, which created the gadget, this was the first time a large orbiting device had ever been launched in this manner. In contrast to conventional space trash removal techniques like robotic arms, tethers, and nets, the de-orbiter can lessen space garbage without using more fuel.

Two companies in the UK are working on technology to find and capture the increasing number of abandoned satellites orbiting the Earth. COSMIC (Cleaning Outer Space Mission through Innovative Capture) is one of the landmark initiatives undertaken by the government of the United Kingdom to accelerate the clean-up of space debris.

Space debris removal is also inhibited by the lack of sustainable, environment-friendly options available to mission developers. There are, however, innovative solutions such as quantum-inspired space debris removal, which combines Artificial Intelligence and quantum-inspired computing to accelerate the process. This is done by simulating multi-debris mission events in which only the right pieces of debris are removed from a pool of millions, as the sketch below illustrates.
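
As a rough illustration of the underlying selection problem (not of the quantum-inspired method itself), a classical greedy baseline might rank debris by collision risk reduced per unit of manoeuvring cost and pick targets until the mission budget runs out. All field names and figures here are invented:

```python
# Hypothetical baseline mission planner: choose, from a large pool, the
# debris whose removal buys the most risk reduction within a delta-v budget.
import heapq
import random

random.seed(0)
pool = [{"id": i,
         "risk": random.uniform(0.0, 1.0),      # collision-risk score
         "dv_cost": random.uniform(5.0, 50.0)}  # manoeuvre cost, arbitrary units
        for i in range(100_000)]

def plan_mission(pool, dv_budget=200.0):
    """Greedily pick targets by risk reduced per unit of delta-v spent."""
    ranked = heapq.nlargest(1_000, pool, key=lambda d: d["risk"] / d["dv_cost"])
    chosen, spent = [], 0.0
    for debris in ranked:
        if spent + debris["dv_cost"] <= dv_budget:
            chosen.append(debris["id"])
            spent += debris["dv_cost"]
    return chosen, spent

targets, dv_used = plan_mission(pool)
print(f"selected {len(targets)} targets using {dv_used:.1f} budget units")
```

The appeal of quantum-inspired optimisers is that, where this greedy pass settles for a locally good answer, they can search combinations of targets and their sequencing far more exhaustively.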

One of the significant challenges in space debris removal is extracting the most hazardous pieces from orbits still in use. One potential method involves nudging objects with a powerful laser beam. Other companies, such as AstroScale, have undertaken the ambitious task of latching a purpose-built satellite onto a piece of debris and deorbiting both together. This could be considered the better alternative, as scientists suggest it consumes less fuel and gets the job done faster and more efficiently.

Assessment

There should be no further delay in implementing measures to prevent the nightmarish Kessler Syndrome from being set into motion.

Climate change contributes to the increased risk of collisions by space debris: long-term drops in upper-atmosphere density reduce the drag that pulls debris down, so junk lingers in orbit longer. It is not too late to control the damage and keep the upper atmosphere a usable resource for times to come, provided a collaborative global effort is put into action.

One of the significant issues in harmonious space debris removal is manoeuvring through the international diplomatic space without stepping on anyone’s toes. Although helpful, the Outer Space Treaty of 1967 remains vague and outdated; there is a need for more explicit rules on space debris removal and on each country’s role in the process.

MASTERING THE MACHINES

Without guardrails to regulate it, AI’s role in our lives can lead to disorientation and disruptions, not always to our benefit.

SYNERGIA FOUNDATION

RESEARCH TEAM

The artificial intelligence (AI) sector is expanding incredibly quickly, and the competition between nations to win the AI race has sharpened. According to projections, by 2030 nearly 70 per cent of businesses will be using AI technology. It is easy to understand why: AI could substitute for humans in making judgments more quickly and affordably, whether modelling climate change, choosing job candidates, or predicting a person’s propensity for crime. It is like the Tom Cruise blockbuster “Minority Report” coming to life!

However, AI comes with its own burden of woes. Algorithms controlling social media content might unfairly censor free speech and shape public discourse. Mass biometric surveillance techniques undermine our right to privacy and reduce civic engagement. And algorithms draw on massive collections of personal data whose extraction, processing, and retention frequently infringe on our data protection rights.

Algorithmic prejudice can exacerbate existing inequalities in our societies and alienate and discriminate against targeted groups. Hiring algorithms are an example: they are likely to favour men over women and replicate racial prejudices because the data they are fed indicates that successful candidates have frequently been white men.

THE RACE TO REGULATE

The risks associated with AI are finally dawning on policymakers and civil society, and there is a movement to bring in regulations at the industrial, national, and regional levels. This must happen soon, before AI, like the internet, grows too large to be brought under any control. The EU AI Act, initially released in April 2021, states that its goal is to ensure AI applications uphold human rights and reflect EU values. The law categorises AI applications into four risk tiers: minimal risk, limited risk, high risk, and unacceptable risk. Systems assessed to represent little or no risk can be employed without restriction.

The EU even cites spam filters and AI-enabled video games as examples of this minimal-risk technology! Notably, the EU AI Act is designed to evolve along with the dynamic nature of AI. Further, the UK Government has established a 10-year National AI Strategy for advancing the technology within its borders, even though it has yet to publish a legislative framework. The UK government describes its goal as making the nation “the ideal place to live and work with AI”, with clear rules, applicable ethical values, and a pro-innovation regulatory framework. A roadmap to a robust AI assurance ecosystem served as the UK’s first significant move toward becoming a global voice of authority on AI legislation.

Additionally, nations have passed national AI laws and frameworks, such as Singapore’s AI Governance Framework and Canada’s privacy laws governing the development of AI systems.

THE REGULATORY LANDSCAPE

Numerous regulations are under development, and to further complicate matters, each has a different target audience and geographic or industry reach: some focus on risk, others on transparency, still others on privacy. Given the enormous diversity of potential AI uses and their impact, this complexity is to be expected. AI regulation could transform how we use Artificial Intelligence. It must first outlaw technology, like predictive policing systems and mass biometric surveillance, that violates our fundamental rights. Any exceptions that permit businesses or government entities to use such systems under certain circumstances could undermine the ban.

Second, precise guidelines outlining what information businesses must publicly disclose about their products can help by requiring companies to give a thorough explanation of the AI system in question. People exposed to AI must be told about it, as with recruiting algorithms, for instance. Systems with the potential to affect people’s lives significantly should receive extra scrutiny and be listed in a publicly accessible database. That would make it simpler for researchers and journalists to verify that organisations and governments are properly defending our freedoms.

Third, when things go wrong, individuals and consumer-protection organisations need to be capable of holding governments and businesses accountable. Current accountability laws must be modified to recognise that algorithms, not users, make the decisions. Fourth, new regulations must designate a person or organisation responsible for checking that businesses and the government correctly adhere to the rules. This overseer should be impartial and endowed with the tools and authority necessary to carry out its duties.

Lastly, AI regulation should include measures to protect the most vulnerable, establishing a process that enables those harmed by AI systems to file a claim and receive compensation. Additionally, employees should be free to protest intrusive AI systems used by their employers without fear of reprisal.

THE WAY AHEAD

Considering the volume of global activity in AI, it is difficult to accurately predict how things will look in the future, especially given how quickly technology is developing. Almost all applications of AI will require regulation of some form.

But does that imply regulation is needed right away? Take the intrusion of AI into medical technology. Already subject to professional regulation, the medical fraternity will have to formulate new rules progressively as new AI technologies are implemented. Similarly, other high-risk applications of AI, such as self-driving automobiles, will need new rules. Thus, existing regulations may only offer guidance on where to concentrate the regulatory effort.

Governments urgently need to develop comprehensive, specialised AI policies for technologies employed in public and private contexts. Regulators have mainly relied on general anti-discrimination laws to address biased outcomes. Concern is growing over the safety of using AI-enabled tools designed for one population to judge another, a risk compounded by the inherent opacity of the intricate programming that underlies machine learning.

Assessment

Transparency will allow the existing legal and regulatory system to create at least satisfactory solutions for controlling AI. This is the better alternative, as it offers adequate incentives for consumers to demand, and manufacturers to deliver, openness in AI decision-making. The long-term effects of a wait-and-see strategy are better than those of hasty regulation based on, at best, an incomplete grasp of what has to be regulated.

Companies risk undermining customer and societal trust in AI-enabled products and sparking unnecessarily stringent regulation if they don’t address these issues early on. This would hurt corporate earnings and AI’s potential benefits for consumers and society.

AI literacy must be imparted to the coming generation if the technology is to be harnessed for the larger benefit of humanity.

TOWARDS A GREENER AI FUTURE!!

Technologies like Artificial Intelligence (AI) and Machine Learning (ML), expected to proliferate widely, must be evaluated for their impact on the environment before it is too late.

SYNERGIA FOUNDATION

RESEARCH TEAM

For the past few decades, carbon emissions from cars have been a political and societal concern: manufacturers are required to report them, governments regulate them, and a wealth of research accompanies them. Even though these moves came almost half a century late, they reflect a recognition of the threat that manmade machines pose to our environment.

Since devices running on artificial intelligence (AI) are predicted to become ubiquitous in our daily existence, a similar strategy to control their impact on the environment is a prerequisite. According to Felix Creutzig, leader of the MCC working group Land Use, Infrastructure, and Transport, “AI is analogous to a hammer in terms of its impact: it may accomplish wonderful things, but it can also shatter a lot.”

DECODING THE THREAT

AI’s environmental effects can be examined from three perspectives. The first is the direct impacts: the carbon emissions from producing and operating the end-user devices, servers, and data centres used for AI development and deployment. The second is the immediate effects of specific AI applications on greenhouse gas emissions in various areas of daily life, the economy, and lifestyles. The third is the systemic, structural effects of AI applications, which can affect the climate positively or negatively.

The size of machine learning models is increasing dramatically, and they need exponentially more energy to train to process images, text, or video accurately. As the AI community grapples with its environmental impact, some conferences now ask that paper submissions include information on CO2 emissions.

A new study proposes a more precise way of quantifying those emissions. Researchers from Stanford, Facebook AI Research, and McGill University have developed a simple tool that quickly calculates how much electricity a machine learning project needs and what that translates to in cost. Once aware of the energy inputs, users can calibrate their usage to moderate energy consumption.
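
The arithmetic behind such a tool is straightforward, even if precise measurement is not. A minimal sketch, with every input an assumed figure rather than a measured one, might look like this:

```python
# Back-of-envelope estimate of a training job's electricity use and cost.
# All numbers are illustrative assumptions, not measured values.
def training_energy_kwh(gpu_power_w=300, n_gpus=8, hours=48, pue=1.5):
    # pue (power usage effectiveness) folds in cooling and other
    # data-centre overhead on top of the GPUs' own draw
    return gpu_power_w * n_gpus * hours * pue / 1000.0

energy = training_energy_kwh()
print(f"~{energy:.0f} kWh, roughly ${energy * 0.12:.0f} at $0.12/kWh")
```

Real tools refine each of these inputs with measured power draw, but the structure of the calculation is the same.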

Machine learning systems have the potential to considerably increase carbon emissions as they become more pervasive and resource-intensive. The Massachusetts Institute of Technology estimates that the global tech industry is responsible for 1.8 to 3.9 per cent of all greenhouse gas emissions. While AI and machine learning account for only a small portion of those emissions, AI has a very high carbon footprint compared to other tech fields. And if you cannot measure an issue, you cannot solve it.

Such tools can help scientists and engineers understand how carbon-efficient their work is and even spark ideas for lowering their carbon impact. CodeCarbon is an open-source project that calculates the ecological footprint of computing, particularly the energy used by independently managed data centres and cloud infrastructure, according to new research by BCG GAMMA and others. The project aims to guide data scientists towards more environmentally friendly computing options. Additionally, it aids in code optimisation.
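
For practitioners who want to try this today, a minimal sketch using CodeCarbon’s documented tracker interface might look like the following; interface details may vary between versions:

```python
# Wrapping a training job with CodeCarbon (pip install codecarbon).
from codecarbon import EmissionsTracker

def train_model():
    # stand-in for a real training loop
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2e")
```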

The researchers started by calculating the power usage of a particular AI model to gain a precise estimate of the corresponding carbon emissions. That is trickier than it sounds because a single machine frequently trains multiple models at once, so each training session’s consumption must be separated from the others. Additionally, each training session draws energy for shared overhead tasks like cooling and data storage, which must be apportioned appropriately.

The next stage is to convert energy use into carbon emissions, which depend on the proportion of fossil and renewable fuels used to generate electricity. Depending on the location and the time of day, that mixture varies greatly. For instance, where there is a lot of solar energy, the carbon intensity of electricity decreases as the sun rises higher in the sky.

The researchers combed through open data sources about the energy mix in various parts of the United States and the rest of the world to obtain that information. Moreover, the carbon emissions from an AI training session vary depending on where it is held. According to the researchers, operating a session in Estonia, which heavily relies on shale oil, will produce 30 times as much carbon as running the same session in Quebec, which heavily depends on hydroelectricity!
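
That regional contrast is easy to express in code. A minimal sketch, using rough illustrative intensity figures rather than the study’s own numbers, shows how the ~173 kWh job from the earlier sketch lands very differently on the two grids:

```python
# Converting energy into emissions: kWh x regional grid carbon intensity.
CARBON_INTENSITY_KG_PER_KWH = {
    "estonia": 0.80,  # shale-oil-heavy grid (illustrative figure)
    "quebec": 0.03,   # hydro-heavy grid (illustrative figure)
}

def session_emissions_kg(energy_kwh, region):
    return energy_kwh * CARBON_INTENSITY_KG_PER_KWH[region]

for region in CARBON_INTENSITY_KG_PER_KWH:
    kg = session_emissions_kg(172.8, region)
    print(f"{region}: ~{kg:.1f} kg CO2")
```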

FINDING THE RIGHT SOLUTIONS

AI appears poised to have two roles. On the one hand, it can aid in mitigating the effects of the climate problem, such as in the development of smart grids, the creation of low-emission facilities, and the simulation of climate change projections. AI, however, is a significant carbon emitter in and of itself.

However, there are some “fast wins” that every AI practitioner should consider to lessen their work’s carbon footprint. Increasing transparency and measurement of this issue is a crucial first step. When AI researchers publish results for new models, data on the amount of energy used in model development should be included with performance and accuracy measurements.

This suggests that, as a best practice, researchers should plot energy cost against performance gains when training models. If this trade-off is explicitly quantified, researchers will be prompted, in light of diminishing returns, to make more informed and reasonable allocations of resources. Ultimately, the community should weigh efficiency metrics like these when assessing whether AI research is sustainable.

Other low-hanging fruit, such as adopting more effective hyperparameter search strategies, cutting back on unnecessary training runs, and using more energy-efficient hardware, can help lower AI’s environmental impact in the short term. However, these corrective measures are insufficient on their own. To make lasting progress, artificial intelligence must undergo a more fundamental change.

CONCLUSION

The artificial intelligence community has to start working on alternative paradigms that do not demand absurdly high energy costs or exponentially expanding datasets. Promising directions include emerging research fields like few-shot learning. We must realise that the road to universal intelligence differs from endlessly expanding neural networks, and push ourselves to find more sophisticated, practical approaches to modelling artificial intelligence from the ground up. It is essential to our continuing fight against climate change and, by extension, to the survival of our planet.

TECHNOLOGY: CHAOS VERSUS ORDER

How can we ensure that digital does not become the ‘dictator’ of the future but rather an enabler of collaboration?

This article is based on a discussion between Synergia Foundation and the Danish Tech Ambassador

Innovation and disruptive ideas happen at the edge of Order and Chaos. Order is everything structured; the way the Chinese government runs almost everything could be taken as an example. But too much order can kill the innovative spirit, because no one wants to take risks.

The digital world makes order extremely easy and scalable, much like the point system in China, where a digital app keeps a comprehensive social score that will one day cover the behaviour of every member of Chinese society. In its extreme form, the digital world can become a ‘golden cage’ that can kill innovation.

DISRUPTERS AND ENABLERS

The true picture lies somewhere between complete chaos and total order, mostly a mix of both. Take the biggest technological breakthroughs of the 20th century: they disrupted our markets and transformed our societies and states. Entrepreneurial states have brought about the majority of such innovations. The iPhone is a classic example, a wonderful design married to great marketing. State agencies have also been at the forefront of innovation; NASA’s space programme and the Pentagon-developed GPS are cases in point. Many of the devices we take for granted today were created on the frontier between chaos and order. The Danish Tech Ambassador, Mrs Anne Marie Engtoft Larsen, says, “Allowing chaotic researchers, chaotic scientists to think abstractly, to be curious about the world beyond and not necessarily be met by metrics of having to invent and create a product in an 18-month life cycle because then you got to start your next VC round of investment, is what leads to meaningful innovation.” The challenge lies in supporting the research and development of digital technologies while being prepared to deal with the market chaos and disruptions that disruptive technologies bring.

The best innovations come in a chaotic, vibrant innovation ecosystem of entrepreneurs who are willing to take a risk and wait and see how to monetise and create products and services on the back of General Purpose Technologies (GPT) that can serve individuals, societies, groups, and communities.

Quantum Computing is a buzzword that conjures images of ultimate computing power that will change the world. The lead that China is reputed to have gained in this field has sent a shiver down the collective spine of the West, spurring it into a technological contest of Thucydidean magnitude!

But the threat of untold disruptions is very real. For the past hundred years, quantum technology has been largely theoretical, but with immense investment there have been definite results in China and the U.S. Even start-ups are jumping into the field, two of them in Denmark. Ecosystems are being set up in different places around the world, with a vibrant private sector coalescing around them; in fact, some of the core players are in Bangalore. If there is too much of an authoritarian approach, you lose the risk-taking, the optimism and, sometimes, the slightly dare-devilish behaviour that brings out the best in young innovators.

In the wave of technological buildup over the past 20 years, there’s been a bit of chaos in the “move fast and break things” ethos of Silicon Valley. Many innovations, especially from the stables of Google and Facebook, have come out of a chaotic approach.

But in this mad race to innovate, citizens’ rights and safety have been largely de-prioritised. This must be corrected to ensure much higher safety standards and data privacy, among other protections. In the next wave of technological innovation, therefore, businesses that treat safety as an integral feature will win. Making safe, resilient and reliable products will matter more going forward than the totally chaotic innovation that was the norm for the last two decades.

CHALLENGES TO INNOVATION

The first challenge is the pace of change. Before the digital revolution, changes were generational, arriving every 30 to 40 years. Today, the pace of change is incredible, but the supporting ecosystem cannot keep up: can governments be responsive enough, and can our education system and institutions support it?

The second challenge is the global coupling of supply chains, even in high-tech areas like semiconductors. The age of decoupling is over for innovation; we need to integrate, but in today’s fractured world of economic sanctions and trade wars, that will be a serious impediment.

The third challenge is creating a communication channel between innovators and the public, to ensure that societal and individual needs coincide with the innovator’s efforts. Those requirements must be explained to the technologist in a language they understand.

A further challenge is making technology democratic, sensitive to mankind’s needs and responsible for its impact on society. In the past, we viewed technology as an enabler of a better life; we never associated it with these three keywords, because technology was considered sterile and neutral, with no gender, political or religious orientation. All of those assumptions now seem obsolete. Will governments permit control over key technology to pass to society to make it more democratic?

These questions have no clear-cut answers at this juncture; we will have to navigate them as best we can and learn along the way. The positive aspect is that every major company, country, and government takes the problem seriously and understands how it is affecting the world.

At the current pace of technological growth, regulators and global regulatory bodies do not understand technology well enough to bring about policy changes at the right point in time, especially with technologies like drones. Technology is a double-edged sword that, in the wrong hands, can produce major impacts, not all of them positive. Regulatory bodies therefore need the right inputs very early, because a particular technology can become prevalent very quickly, and they need to increase the pace at which they intervene.

FINDING THE PATH AHEAD

The question of ethics, especially ethical AI, is becoming increasingly relevant; data science models should not be biased against any race, community or gender. There is a moral component here that can only be addressed through diplomacy, and soft diplomacy seems the only way forward. AI and autonomous weapon systems make a lethal combination and create a dangerous world. A dialogue with technology developers can ultimately help develop technologies that benefit mankind.

It is up to us how we have these conversations, which is why collaboration between innovators, technologists and regulators is so central. We are at a critical juncture for democracies, and strong partnerships between Europe, India, South Africa, Indonesia, Kenya and the many other countries trying to live up to democratic ideals can strengthen democracy, so that we use technology to enable social, economic and civil liberties.
