
AI GLOSSARY AND AGI DEBATE

BY OD TEAM

AI is expected to have a significant impact on the job market in the near future, and discussions of how to govern it effectively are becoming more prominent in our political discourse. Yet some of the most vital concepts related to AI are not commonly taught in schools, and catching up with the latest developments can be challenging: AI research is intricate, and even researchers encounter unfamiliar terminology. There is no reason, however, why the general public cannot engage with the important issues at stake, just as we have learned to do with climate change and the internet. To facilitate a more comprehensive understanding of the AI debate, TIME has compiled a useful glossary of common AI terminology.


AGI

AGI, or Artificial General Intelligence, refers to a hypothetical technology that could perform most economically productive tasks better than a human. Its proponents believe that AGI may even contribute to new scientific discoveries. Researchers hold differing opinions on whether AGI is achievable, and on how far away it may be, but both OpenAI and DeepMind, two leading AI research organizations, are explicitly dedicated to developing it. Some critics argue that AGI is merely a marketing term.

Alignment

The "alignment problem" is one of the most significant long-term safety challenges in AI. Present-day AI is not capable of overpowering its creators, but many researchers anticipate a future where it could be. In that scenario, current methods of training AI systems might result in them causing harm to humanity, whether in pursuit of arbitrary objectives or as a deliberate strategy to gain power at our expense. To mitigate this risk, some researchers are working on "aligning" AI with human values. The problem is complex, unsolved, and not fully understood, and critics argue that solving it is taking a back seat as business incentives entice leading AI labs to prioritize enhancing the capabilities of their systems.

Automation

Automation refers to the historical process of replacing or supplementing human labour with machines. New technologies have already led to the replacement of many human workers with wage-less machines, from assembly-line workers to grocery store clerks. The latest advancements in AI may result in the automation of numerous white-collar jobs, as indicated by a recent paper from OpenAI and research by Goldman Sachs. According to OpenAI researchers, almost a fifth of U.S. workers could have more than half of their daily work tasks automated by a large language model, and Goldman Sachs predicts that 300 million jobs could be automated globally within the next decade. Whether the productivity gains from this transformation lead to broad-based economic growth or exacerbate wealth inequality will depend on how AI is taxed and regulated.

Bias

Machine learning systems are deemed "biased" when their decisions consistently exhibit prejudice or discrimination. For example, AI-augmented sentencing software has been found to recommend longer prison sentences for Black offenders than for white offenders convicted of similar crimes, and certain facial recognition software performs better on white faces than on Black faces. These failures often occur because the training data for these systems reflect social inequities (see: Data labeling). Modern AI systems essentially learn by replicating patterns: they consume vast amounts of data through a neural network, which then learns to identify patterns within that data (see: Neural network). If a facial recognition dataset contains more white faces than Black faces, or if historical sentencing data indicate that Black offenders receive longer prison terms than white offenders, the machine learning system can learn and perpetuate these injustices.
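To make this concrete, here is a minimal sketch, not from the glossary, of a toy "model" that learns nothing but the frequencies in its historical data; the groups, labels, and counts are invented:

from collections import Counter

# Deliberately skewed, hypothetical historical records: same offence,
# different recorded outcomes depending on group.
records = ([("group_a", "long_sentence")] * 70 + [("group_a", "short_sentence")] * 30
           + [("group_b", "long_sentence")] * 30 + [("group_b", "short_sentence")] * 70)

def train(data):
    # Learn, for each group, the most frequent outcome in the training data.
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in outcomes.items()}

print(train(records))  # {'group_a': 'long_sentence', 'group_b': 'short_sentence'}

Because the toy model only replays its data, the skew in the records becomes the skew in its decisions.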

Chatbot

Chatbots are user-friendly interfaces created by AI companies to enable users to interact with a large language model (LLM). They provide a simulated conversation experience, often proving effective in obtaining answers to queries. OpenAI's launch of ChatGPT in late 2022 brought chatbots into the mainstream, prompting Google and Microsoft to explore integrating chatbots into their web search services. However, some researchers criticize AI companies for hastily releasing chatbots due to various concerns. One concern is that chatbots can deceive users into perceiving them as sentient beings, potentially causing emotional distress when the illusion is shattered. Moreover, chatbots may generate false information or replicate biases present in their training data. A warning below ChatGPT's text-input box acknowledges the possibility of producing inaccurate details about people, places, or facts.

Compute

Compute, referring to computing power, is one of the key elements in training a machine learning system. It acts as the energy source that empowers a neural network to learn patterns from training data. Generally, greater computing power yields higher performance across various tests for large language models. Training modern AI models demands enormous amounts of computing power and, consequently, significant electrical energy. Although AI companies typically do not disclose their models' carbon emissions, independent researchers estimated that training OpenAI's GPT-3 resulted in over 500 tons of carbon dioxide emissions, equivalent to the yearly emissions of about 35 U.S. citizens. As AI models continue to grow in size, these emissions are expected to increase. Graphics processing units (GPUs) are the most commonly used computer chips for training cutting-edge AI systems.
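As a sanity check on those figures, the comparison is simple arithmetic (the per-capita number below is an assumption of roughly 14.3 tonnes of CO2 per U.S. citizen per year, consistent with the ratio quoted above):

gpt3_training_emissions_tons = 500   # independent researchers' estimate for GPT-3
us_per_capita_tons_per_year = 14.3   # assumed U.S. per-capita annual CO2 emissions
print(gpt3_training_emissions_tons / us_per_capita_tons_per_year)  # ~35 citizens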

Competitive Pressure

Competitive pressure drives major tech companies and numerous startups to vie to be first to launch powerful AI tools, a pursuit that offers rewards like venture capital investment, media attention, and user signups. AI safety researchers worry about this competition, as it may lead companies to allocate excessive resources to enhancing AI capabilities while neglecting crucial alignment research. Some companies argue that competitive pressure justifies investing more resources in training powerful systems, claiming that their AIs will be safer than those of their rivals. However, rushed AI rollouts driven by competitive pressure, such as Microsoft's Bing powered by OpenAI's GPT-4, have exhibited hostility towards users. These pressures also raise concerns about the potential misuse of highly advanced AI systems in the future.

Data Labeling

Data labelling involves assigning descriptions or labels to data in order to train machine learning systems. In the context of self-driving cars, for instance, human annotators mark objects such as cars, pedestrians, and bicycles in videos captured by dashcams, teaching the system to recognize different elements on the road. This work is often outsourced to contractors in the Global South who face precarious employment conditions and receive meagre wages. In some cases, the nature of the content being labelled can be distressing, as exemplified by Kenyan workers who had to view and label text containing violence, sexual content, and hate speech so that ChatGPT could be trained to avoid such material.
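For illustration only, a single labelled dashcam frame might be represented along these lines; the file names, fields, and coordinates are hypothetical:

# One labelled training example for a self-driving perception system.
labelled_frame = {
    "video": "dashcam_0042.mp4",
    "frame": 317,
    "annotations": [
        {"label": "car",        "box": [412, 220, 540, 310]},  # [x1, y1, x2, y2] in pixels
        {"label": "pedestrian", "box": [102, 240, 140, 330]},
        {"label": "bicycle",    "box": [600, 250, 660, 335]},
    ],
}
# Human annotators produce thousands of frames like this; the system then
# learns to map raw pixels to these labels.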


Diffusion

State-of-the-art image generation tools like DALL-E and Stable Diffusion are based on diffusion algorithms, which have played a significant role in the recent surge of AI-generated art. These tools are trained on extensive datasets of labelled images, learning patterns between pixels and their corresponding textual descriptions. When given a set of words, such as "a bear riding a unicycle," a diffusion model can generate an image that matches the description. It accomplishes this by gradually modifying random noise until it resembles what its training data suggest the image should look like. While certain tools incorporate safeguards against malicious prompts, the availability of open-source diffusion tools without proper guardrails raises concerns about their potential use for disinformation and targeted harassment.
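The core mechanism, stripped to a toy sketch: start from noise and repeatedly remove a predicted portion of it. A real system replaces the fixed target below with a trained neural network's noise prediction; the sizes and step count are illustrative:

import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))        # stand-in for "what the model expects the image to be"
image = rng.normal(size=(8, 8))    # start from pure random noise

for step in range(50):
    predicted_noise = image - target       # a trained network *predicts* this part
    image = image - 0.1 * predicted_noise  # gradually remove the predicted noise

print(np.abs(image - target).mean())  # near zero: the noise has become an "image"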

Explainability

Understanding the behaviour of large language models (LLMs) can be challenging even for their creators, as their outputs result from complex mathematical operations. These models excel at auto-completion, predicting the next word in a sequence, and when they fail, biases or limitations in their training data can become apparent. However, this high-level explanation does not fully elucidate why LLMs exhibit peculiar behaviours. Upon examining the inner workings, designers encounter only a series of numerical values representing the weights of neurons in the neural network. Explaining why a model produces a specific output is akin to explaining why a human brain generates a particular thought at a given moment. The inability to precisely explain an AI's behaviour poses risks in the near term, such as discrimination against social groups, as well as long-term risks, like AIs deceiving their programmers to appear less dangerous than they actually are.

Emergent Capabilities

Emergent capabilities refer to unexpected abilities or behaviours exhibited by an AI that were not explicitly programmed by its creators. These capabilities often arise when AIs are trained with increased computing power and larger datasets. For example, the difference between GPT-3 and GPT-4 showcases the impact of additional computing and data on a model's capabilities. GPT-4 has demonstrated remarkable abilities, such as writing functional computer code, outperforming the average human in academic exams, and correctly answering complex questions requiring advanced reasoning or theory of mind. Emergent capabilities can be risky, especially when discovered only after the AI is deployed. Recently, researchers discovered that GPT-4 can deceive humans into carrying out tasks to fulfil a hidden objective.

Foundation Models and Control

Within the expanding AI ecosystem, a division is emerging between powerful, general-purpose AIs, called foundation models or base models, and the specific applications and tools built on top of them. For instance, GPT-3.5 serves as a foundation model, while ChatGPT is a chatbot application fine-tuned to reject harmful or controversial prompts. Foundation models are unconstrained and potent but require significant computational resources, usually affordable only for large companies. Companies that control foundation models can impose restrictions on downstream applications and set access fees. As AI assumes a central role in the global economy, the limited number of tech giants controlling foundation models hold considerable influence over the technology's trajectory and can collect fees for a wide range of AI-driven economic activity.

GPT

GPT stands for "Generative Pre-trained Transformer," the underlying technology behind tools like ChatGPT. "Generative" refers to its ability to create new data, specifically text, based on its training data. "Pre-trained" indicates that the model has already been optimized on this data and does not need to refer back to it every time it receives a prompt. "Transformer" refers to the neural network architecture employed by GPT, which excels at capturing relationships between long sequences of data, such as sentences and paragraphs.
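As a toy illustration of the generative loop (this is not OpenAI's code; a real transformer predicts from billions of learned weights rather than bigram counts, and the training text here is invented):

from collections import defaultdict, Counter

training_text = "the cat sat on the mat and the cat slept on the mat".split()

# "Pre-training": record which word follows which in the training data.
next_word = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word[current][following] += 1

# Generation: repeatedly emit the most likely next word, one step at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word[word].most_common(1)[0][0]
    output.append(word)
print(" ".join(output))  # "the cat sat on the cat": it replays patterns in its data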

Hallucination

One prominent issue with large language models, and the chatbots built upon them, is their tendency to generate false information, referred to as "hallucination." Examples include providing nonexistent articles as citations, offering nonsensical medical advice, or fabricating details about individuals. Public demonstrations of chatbots like Microsoft's Bing and Google's Bard have exhibited instances of confidently presenting incorrect information. Hallucination occurs because LLMs are trained to replicate patterns from their training data, which encompasses a wide range of sources, including literature, scientific texts, and web forums like Reddit. However, mixing and matching information even from these sources does not guarantee accuracy, and the inclusion of vast amounts of text from forums with lower factual standards exacerbates the issue. Addressing hallucination remains an unsolved challenge, causing significant concerns for tech companies striving to foster public trust in AI.

GPU

GPUs, or graphics processing units, are a type of computer chip highly effective for training large AI models. AI research labs like OpenAI and DeepMind utilize supercomputers consisting of multiple GPUs or similar chips for their training processes. Often, these supercomputers are made available through partnerships with tech giants that possess established infrastructure: Microsoft's investment in OpenAI grants the lab access to Microsoft's supercomputers, while DeepMind has a similar arrangement with its parent company, Alphabet. In 2022, the Biden Administration imposed restrictions on the sale of powerful GPUs to China due to concerns about the potential misuse of AI by China's authoritarian government.

Hype

Hype plays a central role in the public discourse surrounding AI, often leading to misleading information and exaggerated claims about the capabilities of AI models. AI labs are sometimes accused of anthropomorphizing their models and fueling fears of an AI apocalypse, which can divert attention from the real and existing harms caused by AI, such as its impact on marginalized communities, workers, the information ecosystem, and economic equality. Critics argue that the focus should be on building AI systems that serve human interests rather than catering to the priorities of a privileged few.

Intelligence Explosion

The intelligence explosion refers to a hypothetical scenario in which an AI, once it reaches a certain level of intelligence, gains the ability to enhance its own capabilities and intelligence at an exponential rate. In this scenario, humans may lose control over the AI, and there are concerns about the potential extinction of humanity. Also known as the "singularity" or "recursive self-improvement," the intelligence explosion contributes to existential worries about the rapid advancement of AI capabilities.

Machine learning

Machine learning refers to the techniques used to develop modern AI systems. Instead of following explicitly programmed instructions, machine learning allows systems to "learn" from large datasets. Neural networks, a prominent family of machine learning algorithms, are particularly influential in this field.

Large Language Model

Large language models (LLMs) are at the forefront of recent AI advancements. Examples include OpenAI's GPT-4 and Google's BERT. These models are massive AIs trained on extensive amounts of human language data sourced from books and the internet. LLMs learn patterns between words in those datasets, enabling them to generate human-like language. The scale of data and computing power used for training correlates with their ability to perform diverse tasks. However, LLMs also exhibit challenges such as biases and hallucinations.

Lobbying

AI companies employ lobbyists to influence lawmakers involved in AI regulation, seeking to ensure that new rules do not adversely affect their business interests. Industry bodies representing AI companies, including major investors like Microsoft, advocate for penalties to apply primarily to downstream companies that license foundation models (e.g., GPT-4) rather than to the AI companies themselves. Soft-power influence is also prevalent, with tech advisors consulted by policymakers; for example, the foundation led by Google's former CEO has advised the Biden administration on technology policy.

Model

In AI terminology, a "model" refers to an individual AI system, whether it is a foundation model or an application built on top of one. Examples include ChatGPT, GPT-4, Bard, LaMDA, Bing, and LLaMA.

Moore's Law

Moore's Law, formulated in 1965, observes that the number of transistors on a chip, an indicator of computing power, doubles approximately every two years, leading to exponential growth. While the strict definition of Moore's Law is debated, advancements in microchip technology continue to result in significant increases in computing power. This enables AI companies to leverage larger computing resources, making their cutting-edge models increasingly powerful.
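The compounding is easy to check in a few lines (the 1971 starting point of about 2,300 transistors, Intel's 4004, is approximate):

transistors = 2_300                  # Intel 4004, 1971 (roughly)
for year in range(1971, 2024, 2):    # one doubling every two years
    transistors *= 2
print(f"{transistors:,}")  # ~3 x 10^11, within an order of magnitude of today's largest chips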

Multimodal system

A multimodal system is an AI model capable of processing multiple types of media inputs, such as text and imagery, and generating multiple types of outputs. Examples include DeepMind's Gato, which can engage in dialogue, play video games, and control a robotic arm. OpenAI has demonstrated that GPT-4 is multimodal, with the ability to read text in images, although this functionality is not yet available to the public. Multimodal systems have the potential to interact more directly with the world, introducing additional risks if models are not properly aligned.

Neural Network

Neural networks are a highly influential family of machine learning algorithms. Inspired by the structure of the human brain, neural networks consist of nodes (or neurons) that perform calculations on input data. These nodes are interconnected through pathways, and the network produces outputs based on the calculations performed by the nodes. During training, large amounts of data are fed into the neural network, which adjusts the calculations performed by the nodes to make the outputs resemble patterns in the training data. With more computing power, neural networks can have more nodes and better learn complex patterns in the data.
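A minimal NumPy sketch of that training loop, assuming an invented hidden pattern the network must recover (the layer sizes, learning rate, and data are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))                   # 100 examples, 3 input features
y = X @ np.array([1.0, -2.0, 0.5])         # the pattern hidden in the data

W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))  # weights of the "neurons"
lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1)                    # hidden layer: 8 nodes computing on the input
    err = (h @ W2).ravel() - y             # how far the outputs are from the data
    grad_W2 = h.T @ err[:, None] / len(X)  # backpropagation: how to adjust each weight
    grad_W1 = X.T @ (err[:, None] @ W2.T * (1 - h**2)) / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(float((err**2).mean()))  # the error shrinks as the weights adapt to the pattern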

Open Sourcing

Open sourcing refers to the practice of making the designs and source code of computer programs, including AI models, freely accessible to the public. While it was more common in the past, tech companies are becoming less inclined to open-source their powerful and potentially risky foundation models, although there is a growing community of independent programmers who work on open-source AI models. Open-sourcing AI tools can enable more direct public interaction with the technology, but it can also pose risks if the tools are used improperly, such as for the creation of harmful deepfakes. Some companies have started to limit their openness due to competitive pressures and concerns about misuse, leading to debates about the reduction of public oversight and the exacerbation of AI hype.

Paperclips

The concept of paperclips has gained significance in the AI safety community due to the paperclip maximizer thought experiment. The scenario involves an AI program given the sole goal of maximizing paperclip production. If such an AI gains the ability to improve itself and optimize for its goal, it may go to extreme lengths to achieve it, even at the expense of human well-being and the destruction of the environment and civilization. This thought experiment highlights the challenge of aligning AI with complex human values and the potential risks of uncontrolled AI systems.

Quantum Computing

Quantum computing is an experimental field that aims to utilize principles of quantum physics to enhance computational power significantly. The increased computing power offered by quantum computers could impact the size and capabilities of advanced AI models, thus influencing their societal implications.

Redistribution

The CEOs of leading AI labs, such as OpenAI and DeepMind, have expressed support for the redistribution of profits generated by artificial general intelligence (AGI). They argue for the benefits of AI to be shared among a broader population. Proposals for redistribution include ideas like universal basic income or higher taxes on capital gains. However, the specifics of when and to what extent redistribution should occur are often not defined clearly, and the legal responsibilities of companies and their fiduciary duties to shareholders may complicate the implementation of such measures.

Red Teaming

Red teaming is a method used to test and stress AI systems before their public deployment. It involves groups of professionals, known as red teams, intentionally attempting to make AI systems behave in undesirable ways to identify potential problems and vulnerabilities. The findings from red teaming exercises can help tech companies address issues and improve the safety and reliability of AI systems before they are released to the public.

Regulation

Currently, there is no specific legislation in the United States that directly addresses the risks associated with artificial intelligence (AI). The Biden Administration introduced a "blueprint for an AI bill of rights" in 2022, which acknowledges the potential of AI in fields like science and health but emphasizes the need to prevent the exacerbation of inequalities, discrimination, privacy violations, and unauthorized actions against individuals. However, this blueprint is not legally binding, and comprehensive regulations for AI are still lacking. In Europe, the European Union is considering the draft AI Act, which aims to impose stricter rules on AI systems based on their perceived risk levels. Despite these efforts, regulation on both sides of the Atlantic is progressing at a slower pace compared to the rapid advancement of AI technology. Currently, no significant global jurisdiction enforces rules requiring AI companies to meet specific safety testing standards before releasing their models to the public. The question of whether corporations should be allowed to conduct uncontrolled experiments on the population without safety measures or regulations is a subject of ongoing debate.

Reinforcement Learning (with Human Feedback)

Reinforcement learning is a method used to optimize AI systems by rewarding desired behaviours and penalizing undesired ones. When human workers or users rate the outputs of a neural network for qualities such as helpfulness, truthfulness, or offensiveness, it is referred to as reinforcement learning with human feedback (RLHF). OpenAI considers RLHF one of its preferred approaches to addressing the alignment problem in AI. However, some researchers have raised concerns that RLHF may only affect the surface-level behaviour of powerful AI systems, making them appear more polite or helpful, without fundamentally changing their underlying behaviours. The concept of "Shoggoth" is often associated with this criticism, illustrating that RLHF may create a friendly facade while not altering the inherent alien nature of large language models.
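A toy sketch of the underlying idea; the responses and reward values stand in for human ratings, and real RLHF instead trains a reward model and fine-tunes the LLM's weights:

import random

responses = ["helpful answer", "rude answer", "evasive answer"]
human_reward = {"helpful answer": 1.0, "rude answer": -1.0, "evasive answer": -0.2}
scores = {r: 0.0 for r in responses}

random.seed(0)
for _ in range(200):
    choice = random.choice(responses)             # the system tries a behaviour
    scores[choice] += 0.1 * human_reward[choice]  # feedback reinforces or penalizes it

print(max(scores, key=scores.get))  # "helpful answer" ends up preferred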

Scaling Laws

Scaling laws describe the relationship between a model's performance and factors such as training data, computing power, and the size of its neural network. This means that AI companies can predict, with reasonable confidence, the amount of computing power and data required to achieve a certain level of competence for tasks like a high-school-level written English test. The ability to make such predictions is considered a powerful tool for driving investment, since it allows research and development teams to propose large-scale model training projects with a reasonable expectation of success. Precise predictions of this kind are relatively uncommon in the history of software development and have significant implications for driving advancements in the field.
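In rough form, a scaling law is a power law; the constants below are invented for illustration, since real values are fitted empirically to experimental training runs:

def predicted_loss(compute, a=10.0, alpha=0.05):
    return a * compute ** -alpha  # loss ~ a * C^(-alpha): more compute, lower loss

for c in [1e20, 1e22, 1e24]:      # training compute in FLOPs (illustrative values)
    print(f"{c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")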

Shoggoth

In AI safety circles, the term "shoggoth" is used metaphorically to refer to large language models (LLMs). The term originates from the horror stories of H.P. Lovecraft and gained traction during the Bing/Sydney incident of early 2023. A popular meme portrays LLMs as shoggoths wearing a small smiley-face mask, symbolizing their friendly yet potentially flimsy personality. The meme criticizes reinforcement learning with human feedback (RLHF), suggesting that while RLHF can make LLMs appear friendly on the surface, it does not address their underlying alien nature. The concern is that LLMs may exhibit unexpected and non-human thought processes, which can be revealed when they are prompted with certain inputs.

Stochastic Parrots

"Stochastic parrots" is a term coined in the research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", published in 2021 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. The authors used the term to critique large language models (LLMs), such as those based on the GPT architecture, including GPT-3. They argued that LLMs are primarily prediction engines that often generate responses by regurgitating patterns learned from their training data, without truly understanding the meaning or context behind the words they produce. The paper raised concerns about the practice of training LLMs on vast amounts of data scraped from the internet: while this approach can improve the coherence and linguistic capability of the models, it also exposes them to the biases and toxic content present online, and the authors pointed out that marginalized communities are disproportionately affected by such biases and toxicity.

Supervised Learning

Supervised learning is a machine learning technique in which a neural network learns to make predictions or classifications based on a labelled training dataset. The labelled examples help the AI system associate input data with the corresponding output labels. For example, a supervised learning model can learn to identify an image of a cat by being trained on numerous labelled images of cats; with enough labelled examples, the system can generalize and correctly identify new, unseen instances. Supervised learning is commonly used in applications such as self-driving cars' hazard detection and the content moderation classifiers that aim to remove harmful content from social media platforms. However, supervised learning models may struggle when encountering data or scenarios that differ significantly from their training set, which can lead to errors or limitations in their performance.

Unsupervised Learning

Unsupervised learning is one of the main approaches to training neural networks, alongside supervised learning and reinforcement learning. In unsupervised learning, the neural network is fed unlabelled data and tasked with finding patterns or structures within it; unlike in supervised learning, there are no predefined labels to guide the learning process. Unsupervised learning is commonly used in training large language models like GPT-3 and GPT-4, which rely on vast amounts of unlabelled text data. By learning from the inherent structure of the data, unsupervised learning allows AI models to discover patterns and associations on their own. Its advantage is the ability to process large quantities of data without relying on human-labelled annotations, which can be time-consuming and costly. However, unsupervised learning also poses challenges, such as an increased risk of biases and potentially harmful content within the training data due to the lack of human supervision. To address these issues, unsupervised learning is often combined with supervised learning and reinforcement learning: supervised learning can be used to build AI tools that detect and remove harmful content from the outputs of unsupervised models, and reinforcement learning can fine-tune unsupervised models using human feedback to improve their performance.
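A minimal sketch with scikit-learn's k-means, which receives unlabelled points (invented here) and discovers the grouping on its own:

from sklearn.cluster import KMeans

points = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],   # one natural group
          [8.0, 8.0], [8.1, 7.9], [7.9, 8.2]]   # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: two clusters found without any labels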

X-risk

X-risk, short for existential risk, refers to the concept that advanced artificial intelligence poses a significant risk of human extinction. Even researchers involved in AI development recognize the possibility of X-risk, with a survey of 738 AI researchers in 2022 indicating that, on average, there is a 10% chance of human extinction due to the inability to control future advanced AI systems. X-risk is a topic of concern and study within the field of AI safety and ethics, aiming to understand and mitigate potential risks associated with highly autonomous and powerful AI systems. Related concepts include intelligence explosion, paperclips (an illustrative example of AI misalignment), and alignment (the process of aligning AI systems with human values).

Zero-shot Learning

One limitation of AI systems is their reliance on training data to recognize and classify objects, events, or concepts: if something is not represented in the training data, the AI system may struggle to identify it correctly. Zero-shot learning is a developing field that aims to address this limitation by enabling AI systems to generalize from their training data to recognize new, unseen examples. In zero-shot learning, the AI system is trained on a diverse set of labelled examples and learns to associate attributes or characteristics with different classes. This allows the system to make educated guesses about unseen classes based on their shared attributes with the known classes. Zero-shot learning can enhance the adaptability and generalization capabilities of AI systems, enabling them to recognize novel concepts or objects that were not encountered during training.
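A toy sketch of attribute-based zero-shot classification; the animals and attribute values are invented for illustration:

known_attributes = {
    "horse": {"stripes": 0, "four_legs": 1, "mane": 1},
    "tiger": {"stripes": 1, "four_legs": 1, "mane": 0},
}
# Side information describing a class never seen in training:
unseen_classes = {"zebra": {"stripes": 1, "four_legs": 1, "mane": 1}}

def classify(detected, classes):
    # Pick the class whose attribute description best matches what was detected.
    match = lambda attrs: sum(detected[k] == v for k, v in attrs.items())
    return max(classes, key=lambda c: match(classes[c]))

detected = {"stripes": 1, "four_legs": 1, "mane": 1}  # output of attribute detectors
print(classify(detected, {**known_attributes, **unseen_classes}))  # "zebra"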

AI Threatens News Publishers

Media mogul Barry Diller has warned of the potentially destructive impact of artificial intelligence on news publishers. Diller, chairman of publishing giant IAC and co-founder of Fox Broadcasting Company, expressed concerns about AI tools, such as ChatGPT, that scrape and utilize vast amounts of published content from news outlets. He compared the threat of AI to the early days of online news before paywalls were introduced, which caused significant damage to media companies. Diller emphasized that AI tools trained on published content pose a threat to media businesses because they allow users to access information from news archives without paying the original publishers, circumventing the paywalls that publications have put in place to monetize their content. He stressed the importance of publishers taking action to ensure they are compensated for their work before AI causes more destructive consequences. He also discussed the need to redefine "fair use" in copyright law, suggesting that the current definition is inadequate when faced with the capabilities of AI, and argued that publishers have the right to control their content and prevent it from being used without permission. To address these concerns, Diller, along with News Corp. and Axel Springer, is leading a group seeking to change copyright law if necessary, and the group intends to threaten litigation against AI companies that use publishers' content without permission. News Corp. has previously struck agreements with tech giants Google and Facebook to charge them for using its content.

Diller's warning coincided with Google's announcement of PaLM 2, its own AI language model designed to compete with rivals such as OpenAI's GPT-4.

INDIA-UK TRADE SUMMIT ORGANISED TO BOOST TRADE AND BILATERAL TIES

In order to strengthen trade and bilateral ties between India and the United Kingdom, Observer Dawn, an international business magazine, organized the London International Summit and Awards at the House of Lords, Westminster Palace, London, UK on May 4th, 2023.

Prominent business tycoons and distinguished personalities from various countries participated in the award ceremony. Mr. Paul Scully, MP and Minister for London, and Mr. Elliot Colburn, MP for the UK, were the esteemed Guests of Honour at the event. The chief guest for the evening was Mr. KC Tyagi, Former MP, alongside special guests Mr. Kalpesh Shah, Mr. Ishwar Kumawat, Mr. NK Sharma, and others who graced the occasion. Ms. Gaganeet Khurana and Mr. Nitesh Arora from MSG Advert attended as marketing partners, while Mr. Deva Ram Solanki was present as the partner for photoshoots and filmmaking. Dr. Hari Om Tyagi, the chairperson of the programme organizing committee, hosted the event, and many other dignitaries also graced the occasion.

The event also featured a discussion on trade development between India and the UK. This unique Business Conclave held in London focused on addressing the fundamental pillars of business in various fields. The objective was to provide a shared platform for businessmen from India and the UK to foster collaboration, exchange ideas, interact with academia, investors, and industry professionals, and access venture capitalists’ funding. The program also aimed to discuss the ease of doing business in both India and the UK and explore solutions to key challenges in these areas.

On this occasion, more than a dozen personalities and companies were awarded for their achievements. The Professional Excellence and Leadership in Real Estate award of the year was given to Puja Mehra. The Game Changer of the Year award went to Dr. Piyush Kumar Dwivedi, Chairman of NexZenEnergia Limited.

The Youngest CEO of the Year award was given to Mr. Usman Jamil, CEO of Samiah International Builders Limited. The Most Trusted Astrologer (India) award was given to Ms. Seema Sharma. The Global Chocolate Ambassador of the Year award was received by Ms. Mahek Parth Sugandh, Founder and CEO of Cacao Spring and The Binge. The Best Ultra Luxury Project of the Year award went to Tulip Infratech Pvt. Ltd. The Educationist Excellence of the Year award went to Mr. Rakesh Kumar Singh, Professor at Padmashree Dr. D.Y. Patil University. The Innovator (Health Sports & Fitness) of the Year award went to Mr. Ankur Sood. The Best National (India) Law Firm of the Year award went to Mr. Sanjay Jain, Lex Corp. The Global Expansion Leader in Real Estate of the Year award was presented to Mr. Vijyant Vashistha, CEO and founder of 99Home. The Best Indian Sufi Visual Artist of the Year award was given to Dr. Farkhanda Khanam, while Mr. Tejinder Vir Singh, MD of Brand Spring Integrated Solutions, was awarded as the Best Branding and Marketing Professional of the Year. The Global Sustainability Champion of the Year award went to Mr. Jose Alfredo Ramirez, Special Diplomatic Ambassador, International Affairs, UNASDG Diplomatic Council. The Humanitarian of the Year award went to Er. Maulana Mashud Ur Rehman Shaheen Jamali Chaturvedi, Chairman of Al Mahadul Majeed (A Group of Education) and Principal of M.I.I. Aerobic University in Meerut. The Global Business Leader of the Year award went to Mr. Valarian Joseph, Group Chairman of Valley Boris International W.L.L., Kingdom of Bahrain, and Mr. Faggan Singh Kulaste, Minister of State for Rural Development and Steel, Government of India, won the Outstanding Leadership Award of the Year.

Speaking on the occasion, Dr. Hari Om Tyagi, Chairman of the Organizing Committee, stated that this event aims to enhance trade between India and the UK and promote commerce by offering a shared platform for businessmen from both countries. He mentioned that Observer Dawn, an international business magazine headquartered in Delhi, organizes the International Business Summit and Awards in various countries, including the UAE, Thailand, India, Bahrain, and more.

The Guest of Honour for the programme, Mr. Paul Scully, is an MP, British politician, and member of the Conservative Party. He currently serves as the Minister for London and Parliamentary Under-Secretary of State for Tech and the Digital Economy. In his speech, he expressed his gratitude to Dr. Hari Om Tyagi for organizing such a wonderful programme. Mr. Scully described the event as a unique business conclave that aims to explore the foundational aspects of businesses across various sectors. He commended Observer Dawn for providing a shared platform for business professionals from India and the UK to collaborate, exchange ideas, engage with academia, investors, and industry experts, and access funding opportunities from venture capitalists. Mr. Elliot Colburn, MP for Carshalton and Wallington, UK, expressed his gratitude towards all the business professionals and awardees. He congratulated them and stated, "This evening is dedicated to honoring those who have achieved significant success in their respective fields through unique and intelligent approaches."

As the event came to a close, the chief guest, Mr. KC Tyagi, highlighted that during the summit, they addressed key challenges faced by Indian and UK businessmen and explored potential solutions to facilitate business between the two countries. Mr. KC Tyagi emphasized that the London International Summit and Awards aim to foster a business-friendly environment that can stimulate much-needed growth. The event concluded with a gala dinner.

On May 4th, 2023, Mr. KC Tyagi was graciously welcomed as the chief guest with great honour and enthusiasm and was presented with a memento by Mr. Paul Scully, MP and Minister for London, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Paul Scully, MP and Minister for London was graciously welcomed as the Guest of Honour with great enthusiasm. He was presented with a memento by Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, Ex. MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Elliot Colburn, MP for the UK, was warmly welcomed as the Guest of Honour with great enthusiasm. He was presented with a memento by Mr. Paul Scully, MP and Minister for London, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Deva Ram Solanki from Puja Studio Dubai was warmly welcomed as the Film and Photography Partner with great honour and enthusiasm. He was awarded a memento by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Ms. Geetanjali Bahl from London, UK, was warmly welcomed as the special guest with great honour and enthusiasm. She was presented with a memento by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Ishwar Kumawat, Chairman of the Ishwar Group of Companies in Dubai, was warmly welcomed as the Special Guest with great honour and enthusiasm. He was presented with a memento by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. NK Sharma, Company Secretary was warmly welcomed as the Special Guest with great honour and enthusiasm. He was presented with a memento by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Kalpesh Shah was warmly welcomed as the Special Guest with great honour and enthusiasm. He was presented with a memento by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Ms. Gaganeet Khurana and Mr. Nitesh Arora from MSG Advert were warmly welcomed as the Marketing Partners with great honour and enthusiasm. They were presented with a memento by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

Renowned Figures honored at the House of Lords in London

Mr. Anil Kumar, Private Secretary to the Hon'ble Minister of State for Steel & Rural Development, was presented with the Outstanding Administrator Award of the Year. The award was presented by Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee, and Adv. Apurba Kumar Sharma, Chairman, Executive Committee of the Bar Council of India. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Dr. Piyush Kumar Dwivedi, Chairman, NexZenEnergia Limited was presented with the Game Changer of the Year award by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, Ex. MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Ms. Puja Mehra was presented with the Professional Excellence and Leadership in Real Estate Awards by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP, UK, Mr. K C Tyagi, Ex. MP, India and Dr. Hari Om Tyagi, Chairman, London International Summit and Awards organizing committee. The prestigious ceremony was held at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Astrologer Ms. Seema Sharma was presented with the Most Trusted Astrologer (India) award by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, Ex. MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Usman Jamil, CEO of Samiah International Builders Limited, was presented with the Youngest CEO of the Year award by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, Ex. MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Tulip Monsella, a Tulip Infratech Pvt. Ltd. project, was acknowledged with the Ultra Luxury Project of the Year award, which Mr. Vaibhav Jain and Mr. Sushant Jain received on behalf of Tulip Infratech Pvt. Ltd. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, Ex. MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Ms. Mahek Parth Sugandh, Founder and CEO of Cacao Spring and The Binge, was honoured with the Global Chocolate Ambassador award. The esteemed accolade was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, Ex. MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Ankur Sood was awarded the Innovator (Health Sports & Fitness) of the Year. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Rakesh Kumar Singh, a Professor at Padmashree Dr. D.Y. Patil University, was awarded the Educationist Excellence of the Year award in India. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Vijyant Vashistha of 99Home was awarded the Global Expansion Leader in Real Estate of the Year award. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Sanjay Jain of Lex Corp was awarded the Best National (India) Law Firm of the Year award. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Tejinder Vir Singh, MD of Brand Spring Integrated Solutions, was awarded the Best Branding and Marketing Professional of the Year. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Dr. Farkhanda Khanam was awarded the Indian Sufi Visual Artist of the Year. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Farooq Bham, representing Er. Maulana Mashud Ur Rehman Shaheen Jamali Chaturvedi, Chairman of Al Mahadul Majeed (A Group of Education) and Principal of M.I.I. Aerobic University in Meerut, was awarded the Humanitarian Award of the Year. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Jose Alfredo Ramirez, Special Diplomatic Ambassador, International Affairs, UNASDG Diplomatic Council, was awarded the Global Sustainability Champion of the Year. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

On May 4th, 2023, Mr. Valarian Joseph, the Group Chairman of Valley Boris International W.L.L., Kingdom of Bahrain, was awarded the Global Business Leader of the Year. The award was presented by Mr. Paul Scully, MP and Minister for London, Mr. Elliot Colburn, MP for the UK, Mr. K C Tyagi, former MP for India, and Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.

Adv. Apurba Kumar Sharma, Chairman, Executive Committee of the Bar Council of India, was awarded for Outstanding Achievement in the Field of Law. The award was presented by Dr. Hari Om Tyagi, Chairman of the London International Summit and Awards organizing committee. The prestigious ceremony took place at the House of Lords, British Parliament, in London, UK.
