Technology Update
In Brief
Q2 2024

Welcome to the latest edition of our Technology Sector Update - In Brief series. In this interactive magazine, we feature our most popular insights and events from the past month. We hope you find it informative.
Content includes our flagship Technology and Digital Disruption conference, plus our recent Data Privacy masterclass.
You can also read our most popular articles from the last quarter:
• European Data Protection Board Clarifies the One Stop Shop Test
• Potential Liability for Chatbot Hallucinations?
• New CCPC Guidelines for Social Media Influencer Advertising
• Artificial Intelligence and the Impact on HR Practices
• Cyber/Data Breach Annual Review 2023
Please contact us or any member of our team if you wish to discuss these topics or any other issues impacting your organisation.
Oisín Tobin, Partner, Technology Sector Lead: otobin@mhc.ie
Philip Nolan, Chair & Partner, Head of Technology: pnolan@mhc.ie
Contact our Technology Sector Team
We held our Annual Technology Conference, Technology and Digital Disruption, at the Marker Hotel. Our MC was Áine Kerr, digital entrepreneur, journalist and COO of Kinzen.
Watch an exclusive highlight from her interview with Oisín Tobin, Partner and data privacy expert at Mason Hayes & Curran. You can also watch all the panels, which covered Fintech, Scaling Companies and AI, on our website.
We also hosted our Data Privacy and Emerging Technology Regulation Masterclass for in-house lawyers. The event focused on three key areas, which you can now watch on demand while earning CPD points:
• Data Privacy Trends
• New Technology Regulation
• Enforcement of Technology Regulation
Due to time constraints, our events focus only on key trends and highlights. If you would like to learn more, please contact our team.
The European Data Protection Board (EDPB) issued an opinion on 13 February 2024 clarifying the one-stop-shop (OSS) test and, in particular, what is a “main establishment” under Article 4(16) of the GDPR. The opinion is relevant to any existing businesses with cross-border activities in the EU as well as those considering establishing themselves in the EU. It highlights the need to put in place robust decision-making and governance frameworks to benefit from OSS.
While the EDPB’s opinion is broadly consistent with prior guidelines, it is debatable whether its position is correct.
If an organisation has more than one EU establishment, identifying its “main establishment” is critical as this determines its “lead supervisory authority” (LSA) under the OSS test (Article 56). The LSA is the controller’s sole interlocutor and leads all inquiries into the controller’s EEA data processing activity. This means other EU supervisory authorities (SAs) cannot directly regulate the controller, except in limited circumstances.
If a controller has more than one establishment in the EU, its “main” establishment is the place of its central administration in the EU. This is the case “unless” another one of its EU establishments takes “the decisions on the purposes and means of the processing of personal data” and “has the power to have such decisions implemented” (Article 4(16)).
The EDPB has repeatedly made clear that the GDPR does not permit “forum shopping” and the identification of the “main establishment” must be based on objective criteria.
• An organisation’s place of central administration in the EU, e.g. regional headquarters, can be considered as a “main establishment” for the OSS test, provided that it (a) takes the decisions on the purposes and means of the processing of personal data and (b) has power to have these decisions implemented.
• Where there is no evidence that (a) and (b) lie with the place of central administration in the EU or with “another establishment of the controller in the Union”, i.e. if these take place outside the EU, there is no “main establishment” for that processing. In those cases, OSS should not apply.
• From a practical perspective, controllers bear the burden of proving which of their establishments takes the processing decisions and has the power to implement them. Those claims are subject to review by the SAs, who can challenge the controller and request further information where required.
• The EDPB suggests that effective Article 30 GDPR records of processing activities (ROPA) and privacy policies can be a means of supporting a controller’s claim of main establishment.
• SAs should share their assessment and conclusion regarding a controller’s main establishment with all other concerned SAs. This enables other SAs to push back on this assessment and refer the matter to the EDPB for determination under the Article 63 GDPR consistency mechanism.
The EDPB’s position remains consistent with previous guidance, including Guidelines 8/2022 on identifying a controller or processor’s lead supervisory authority. However, there is some debate about whether its position is correct.
Article 4(16) GDPR states that:
“as regards a controller with establishments in more than one Member State, the place of its central administration in the Union, unless the decisions on the purposes and means of the processing of personal data are taken in another establishment of the controller in the Union and the latter establishment has the power to have such decisions implemented, in which case the establishment having taken such decisions is to be considered to be the main establishment”.
Contrary to the EDPB’s position, Article 4(16) does not say that a place of central administration must take decisions on purposes and means to be the “main establishment”. Instead, it provides for the central administration to be displaced as the main establishment where another EU establishment makes these decisions and has the power to implement them.
This issue was previously considered by France’s highest administrative court, the Conseil d’État. The Conseil d’État equated the place of central administration to “the place of its real seat”. It clarified that if a non-EU controller has “neither a central administration nor an establishment with decision-making power as to its purposes and means” in the EU, OSS does not apply. In other words, OSS is excluded only where both of these cumulative negative conditions are met.
Unlike the EDPB’s opinion, this allows a central administration to be the main establishment even where one of these conditions is not met. This would be the case, for example, where the decision-making power regarding the purposes and means of processing sits outside the EU, rather than with the EU central administration. This could arise where the establishment exercises sufficient power of direction or control over other EU subsidiaries to constitute the “real seat” or central administration in the EU, even though neither it nor any other EU establishment makes decisions on the purposes and means of the processing of personal data.
Whilst Recital 36 GDPR does say that the “main establishment” should imply the effective and real exercise of management activities “determining the main decisions as to the purposes and means of processing through stable arrangements”, Recitals are not binding. Notably, this requirement is not reflected in Article 4(16), at least as regards the “central administration” criterion, potentially allowing for a different interpretation.
The EDPB’s more restrictive interpretation of the OSS test does not serve the GDPR’s objective of consistent regulation. Instead, it can result in further fragmentation, with multiple SAs regulating a single controller.
Overall, the opinion is broadly consistent with the EDPB’s existing guidelines.
It is clear from the opinion that identifying an organisation’s place of central management or headquarters is a good starting point. However, to support a claim of “main establishment”, organisations must have evidence demonstrating where decisions about their data processing activities are taken and where the power to implement those decisions lies. Organisations should expect to be challenged by SAs and so must ensure that they have relevant measures in place, such as appropriate governance and decision-making frameworks, robust policies and procedures, and effective GDPR accountability documents such as ROPAs, to substantiate their position.
We work extensively with clients on developing governance and decision-making frameworks to enable them to do so.
For more information and expert advice on related matters, contact a member of our Privacy & Data Security team.
Chatbots are often a customer’s first point of contact with a company when they raise a query on its website. While the imminent adoption of the EU’s AI Act has attracted most of the attention around the regulation of chatbots, a recent small claims tribunal decision from Canada is a cautionary reminder that other areas of law also apply to chatbots.
In that case, a chatbot gave inaccurate information to a consumer who asked about an airline’s bereavement fare policy, despite the relevant webpage of the website correctly stating that policy. Relying on the chatbot’s “hallucination”, the consumer bought two full-price fares to attend their grandmother’s funeral. When the consumer’s subsequent application for a partial refund was refused, the tribunal directed the airline to provide it.
The tribunal found that the airline had made a negligent misrepresentation, as it had not taken reasonable care to ensure its chatbot was accurate. As a result, the airline was required to honour the partial refund. The airline argued that it was not responsible for information provided by its agents, servants or representatives, including a chatbot, but the tribunal rejected this argument: the chatbot was not a separate legal entity and was instead simply a source of information on the airline’s website.
The airline also argued that its terms and conditions excluded its liability for the chatbot, but did not provide a copy of the relevant terms and conditions in its response, so the tribunal did not substantively consider the argument. In addition, while the chatbot’s response had included a link to the relevant webpage, the tribunal found that the consumer was entitled to rely on the information provided by the chatbot without double-checking it against that webpage.
Under Irish law, it is possible that a court would reach a similar conclusion, particularly in a consumer dispute. It is unlikely that an Irish court would find that a chatbot was a separate entity from its operator; it would more likely find that the chatbot constituted information on the company’s website.
Irish law also prohibits misleading commercial practices. This includes the provision of false or misleading information that would cause an average consumer to make a transactional decision that they would not otherwise make. The provision of false information by a chatbot which results in a consumer making a purchase on the trader’s website could therefore be deemed a misleading commercial practice in an Irish court.
While the point was not fully considered in the Canadian decision, a contractual clause which excludes the liability of a company for hallucinations by its chatbot in similar circumstances may not be enforceable in Ireland. Under Irish law, contract terms which are unfair are not enforceable against a consumer. While terms which exclude a company’s liability for chatbots are not uncommon, the fairness of a term such as this, particularly where the consumer has made a purchase from the company relying on the information provided by the chatbot, would be questionable.
While chatbots are a useful tool for companies to interact with their customers, companies should be aware of the legal risks which arise through their use. While it is unlikely that this single tribunal decision from Canada will make companies liable for all chatbot hallucinations, it is a reminder that their use can lead to unexpected liability for the company operating the chatbot. The risk is more stark in a B2C setting as EU consumer law will generally not allow organisations to make consumers responsible for risks associated with poor product performance.
Companies will also have to consider their potential liability for chatbot hallucinations under the European Commission’s proposed revised Product Liability Directive. The revised Directive will enter into force in 2024 and the new rules will apply to products placed on the market 24 months after its entry into force. The revised Directive will significantly modernise the EU’s product liability regime, notably by expanding the definition of a ‘product’ to cover software, including standalone software, and digital manufacturing files. Under the new rules, software will be a product for the purposes of applying no-fault liability, irrespective of the mode of its supply or usage and whether it is stored on a device or accessed through a communication network, cloud technologies or supplied through a software-as-a-service model.
The revised Directive also expands the scope of liability beyond the point at which a product is put into circulation. A manufacturer that retains control of a product after it has been placed on the market, for example through software updates and upgrades, may remain liable. Manufacturers may also be held liable for software updates and upgrades supplied by a third party where the manufacturer authorises or consents to their supply, e.g. where a manufacturer consents to a third party providing software updates, or where it presents a related service (an integrated or inter-connected digital service) or component as part of its software even though it is supplied by a third party.
Organisations should also be mindful of the EU’s proposed Artificial Intelligence Liability Directive, which is closely linked to and complemented by the revised Product Liability Directive. The proposed AI Liability Directive seeks to harmonise certain aspects of national fault-based civil liability rules for damage caused by AI systems, including high-risk AI systems as defined under the AI Act. The draft text is currently making its way through the EU’s legislative process. Once adopted, member states will have two years from its entry into force to transpose the legislation into national law.
To reduce potential liability from chatbots, companies should regularly review the performance of their chatbots. In particular, the following could form part of the regular review (a simple automated spot-check is sketched after this list):
1. Reviewing the output of chatbots to ensure that the information they provide aligns with the company’s advertising and sales practices
2. Promptly investigating any customer-reported issues associated with their chatbots
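By way of illustration only, part of such a review could be automated. The following sketch, in Python, assumes a hypothetical ask_chatbot() function and illustrative policy questions and wording; it simply flags chatbot answers that omit required policy wording so a human can investigate before customers rely on them. It is a minimal sketch of one possible approach, not a complete compliance solution.

```python
# Minimal sketch only: ask_chatbot is a hypothetical client for the company's
# chatbot, and the questions and required wording below are illustrative.
from collections.abc import Callable

# Known policy questions paired with wording the answer must contain,
# drawn from the company's official policy pages (hypothetical examples).
POLICY_CHECKS: dict[str, list[str]] = {
    "What is your bereavement fare policy?": ["within 90 days", "death certificate"],
    "Can I cancel a booking for a refund?": ["24 hours"],
}

def review_chatbot(ask_chatbot: Callable[[str], str]) -> list[str]:
    """Ask the chatbot known policy questions and flag answers that omit
    required wording, so a human can review them before customers rely on them."""
    flagged = []
    for question, required in POLICY_CHECKS.items():
        answer = ask_chatbot(question).lower()
        missing = [fact for fact in required if fact.lower() not in answer]
        if missing:
            flagged.append(f"{question!r}: answer omits {missing}")
    return flagged

# Example run against a stubbed chatbot that omits a required fact.
stub = lambda question: "Bereavement fares require a death certificate."
for issue in review_chatbot(stub):
    print(issue)
```

Flagged answers would then feed into the manual review described above.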
Where the chatbot is provided by a third party, organisations should ideally ensure that the contract with that third party affords them sufficient protection. This would include clearly outlining which party bears liability for misleading or false information, and imposing appropriate obligations on the third party to correct the chatbot in a timely manner.
However, chatbot providers will strongly resist any risk sharing, which means organisations need to be vigilant about managing this risk in a practical manner, including by ensuring that related services are covered under their product liability insurance. When deploying chatbots with consumers, even for basic, apparently benign use cases, thoroughly examine the risks associated with hallucinations and incorrect responses. If those responses cannot be fixed, consider another option or put in place a robust remedy process for your customers.
For more information, please contact a member of our Technology team.
The CCPC’s new guidelines set out prescriptive labels and other forms of advertisement identification which influencers must use when posting advertisements and other forms of commercial content online.
The purpose of the new labels is to enable an influencer’s consumer audience to immediately recognise and understand that content is an advertisement or other type of commercial communication before a consumer actively engages with it.
The publication of the new guidelines reflects the CCPC’s, and the ASAI’s, increased activity in the realm of influencer advertising on online platforms. It also builds on the CCPC’s findings and recommendations in its Influencer Marketing research report, published in November 2022.
The new guidelines apply to a broad range of individuals who can be categorised as acting as an “influencer”, i.e. anyone on an online platform who receives a form of commercial benefit for posting content, whether in a post, story, video or otherwise.
For example, types of commercial benefit may include:
• Financial payments
• Free or discounted products, experiences, trips and services
• The sharing of discount codes or affiliate links with followers for commission or other benefit
• Promotion of an influencer’s own brand, or the brand of a family member or friend, whether fully or partially owned
• Where the influencer is commercially connected to the brand, and
• Any other commercial aspect
The term “influencer” can also extend to content creators, streamers, bloggers and other celebrity or media personalities.
The CCPC’s new guidelines prescribe the labels which influencers must use for advertisements and commercial content. The new labels are divided into two groups: primary advertisement labels and secondary advertisement labels.
Primary Labels
• #Ad
• #Fogra for posts in the Irish language
• Labels provided for by platforms, such as the “Paid Partnership” tool by Instagram or the “Promotional Content” or “paid partnership” tools provided by TikTok
• #Gifted (or #Féirín for Irish posts) – use only where the influencer receives unsolicited products or services and the brand has not directly influenced the post
Secondary Labels
• #Collaboration
• #BrandAmbassador
• #Sponsored
• #Affiliate
• #PRstay
• #PRinvite
• #PressDrop
• #OwnBrand
• #BrandInvestor
• #PreviousCommercialRelationship
• Custom labels, for example #IWorkWith[Company]

At least one primary label must be present in every commercial post or advertisement. Secondary labels are optional: where appropriate, they should be used only alongside one or more primary labels, and only where necessary in the context of a particular commercial arrangement. Influencers should be careful not to overuse secondary labels, to avoid confusing consumers. A simple illustration of how these labelling rules might be checked is set out below.
It may also be helpful for influencers to declare when something is not an advertisement, provided that this is truthful. This way, consumers can also instantly recognise where something is not an advertisement or commercial content.
Brands who engage influencers to promote their products and services on online platforms should inform themselves of their legal responsibilities towards consumers. It is important that brands remind influencers promoting their products and services to adhere to the new guidelines.
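By way of illustration only, a brand or platform compliance team could screen post captions against these labelling rules with a simple automated check. The sketch below, in Python, uses the label names from the guidelines above; the checker itself is a hypothetical helper, not an official CCPC or platform tool.

```python
# Illustrative sketch only: the label sets come from the CCPC guidance above,
# but this checker is hypothetical. It inspects only hashtag labels in the
# caption text; platform-provided labels such as Instagram's "Paid Partnership"
# tool cannot be detected this way.
PRIMARY_LABELS = {"#ad", "#fogra", "#gifted", "#féirín"}
SECONDARY_LABELS = {
    "#collaboration", "#brandambassador", "#sponsored", "#affiliate",
    "#prstay", "#prinvite", "#pressdrop", "#ownbrand",
    "#brandinvestor", "#previouscommercialrelationship",
}

def check_labels(caption: str) -> list[str]:
    """Return a list of labelling issues found in a commercial post's caption."""
    tags = {word.lower() for word in caption.split() if word.startswith("#")}
    issues = []
    if not tags & PRIMARY_LABELS:
        issues.append("Missing a primary label (#Ad, #Fogra, #Gifted or #Féirín).")
        if tags & SECONDARY_LABELS:
            issues.append("Secondary labels may only appear alongside a primary label.")
    return issues

print(check_labels("New favourites! #Sponsored"))      # two issues flagged
print(check_labels("New favourites! #Ad #Sponsored"))  # [] - compliant
```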
Where influencers do not take heed of the new guidelines, this may result in complaints by consumers or consumer groups to the Advertising Standards Authority of Ireland (ASAI). If upheld, the ASAI may publish the complaints and subsequent findings. This can carry reputational and commercial implications for both the influencer and brand concerned.
As part of its research, the CCPC has directed some of its recommendations at online platforms. Online platforms should consider implementing practices and strategies to:
• Support users to appropriately label advertising content. This will help to deter the risk of hidden or misleading advertising content.
• Inform and educate influencers on the risks of misleading advertising and of not using the mandatory labels and ensure that there are clear policies in place for influencers to follow.
• Inform and educate users generally, including brands and consumers, on the risks of hidden or misleading advertising, and facilitate the reporting of hidden and misleading advertising content.
These recommendations carry additional weight given that EU regulation of online platforms has increased significantly in the last few years. Under Article 25 of the EU’s Digital Services Act, for example, online platforms must ensure that their platforms are not designed, organised or operated in a way that may deceive or manipulate users, or in a way that materially distorts or impairs the ability of users to make free and informed decisions. Failure to enable users of a platform to identify an ad, or the presence of disguised ads, could arguably amount to a breach of this platform design obligation.
Are there any existing laws in place that regulate influencer advertising?
Yes. In Ireland, influencer advertising is currently governed by a number of Irish and EU laws, as well as the ASAI’s self-regulatory code. These include:
Consumer Protection Act 2007
This legislation implements the requirements of the Unfair Commercial Practices Directive into Irish law. Influencers have a responsibility to label commercial content under this Act. Failure to identify the commercial intent of an advertisement can amount to a misleading commercial practice. This typically applies to the trader, i.e. the brand selling a product or service to consumers. However, it can extend to persons acting on behalf of the trader. This means that influencers promoting goods or services on behalf of a trader also have obligations under this legislation.
It is a prohibited commercial practice to:
• Use editorial content in media to promote products where it is a paid promotion, without making it clear to the consumer that it is a paid-for promotion, and/or
• Falsely represent or create the impression that they are a consumer when they are in fact a trader or influencer acting for a trader.
Consumer Rights Act 2022
This legislation implements the requirements of the EU Omnibus Directive into Irish law, which introduced new requirements for transparency in advertising. Public statements made by brands and businesses in advertising, or by influencers representing those brands or businesses, will be considered by regulators when assessing whether a product or service (including digital content and digital services) meets statutory conformity requirements.
It is important for businesses to ensure that their contracts with influencers include accurate descriptions of products or services, along with clear instructions on how to label promotional content and describe the products or services in public statements. Failure to do so could result in a higher conformity threshold having to be met and/or enforcement by regulators for potentially misleading commercial practices.
The ASAI Code applies to all types of marketing communications, including online advertisements and paid-for promotional content. Marketing communications must be designed and presented in a way that makes clear that they are marketing communications. The Code also obliges businesses to make sure that any advertorial content is not misrepresented as editorial content.
Regulatory activity has already begun with the recent launch by the ASAI of a Social Media Influencer Reporting tool on its website by which the public can submit anonymous complaints about influencers and brands who do not comply with the new rules. Over 800 complaints were made using this novel tool in the final weeks of 2023. Given the new regulatory focus on influencer advertising, it is likely that 2024 will see even greater enforcement action against influencers, brands, and potentially online platforms, who fail to adhere to the new guidelines.
We recommend that organisations take measures now to equip influencers to adequately disclose their advertising and branded content, with a particular focus on ensuring influencers are aware that they must use the new labels.
The digital design and presentation of the user experience on a company’s website, previously a matter for developers and creative teams, is increasingly one on which companies’ legal teams and external lawyers have significant input. Businesses with consumer-facing platforms, and businesses which use influencers on consumer-facing platforms for digital commerce, should review their current compliance with consumer legislation.
Greater focus should be given to consumer behaviour when engaging influencers. Businesses are encouraged to carry out ongoing compliance reviews in light of emerging legislation and market practices. In addition, when creative teams are working on strategies for the presentation of a new product or layout of a new website, the consumer user’s position and the impact of presentation and design choices on that consumer user’s understanding should be considered.
We have market-leading experience in advertising law and the requirements of regulatory guidance, and are well placed to assist companies with their compliance needs.
Contact a member of our Technology team for expert advice and guidance.
Artificial Intelligence (AI) systems are increasingly being adopted by HR practitioners. Their benefits can include increased productivity, efficiency and cost reductions. However, while case law is yet to develop, there is potential for a broad range of claims from employees or job applicants where AI has made or influenced an unfair, biased or discriminatory decision affecting their employment.
In this article, we look at the HR practices which may lend themselves to AI deployment, namely recruitment, performance management and employee monitoring, together with some of the potential risks that AI deployment might pose to employers.
AI-powered software simplifies the recruitment process by scheduling interviews, screening applications and matching suitable candidates to jobs. The danger of AI, however, is its potential for bias and discrimination in a recruitment context. AI systems are trained using existing data, which can inadvertently cause pre-existing biases to become more pronounced, particularly where a workplace already lacks diversity. The capacity of AI to discriminate was highlighted when Amazon reportedly scrapped an AI recruitment tool, in development since 2014, after it was found to favour male applicants.
It has also been reported that candidates from ethnic minorities or with certain disabilities, such as speech impediments or neurological conditions, find it more difficult to engage with AI interview software which analyses speech patterns or facial expressions.
AI is increasingly used by employers to review employees’ performance, particularly in roles where ‘real-time’ performance data is readily available. Technology is constantly evolving, and some AI systems can now forecast an employee’s future performance by assessing historical data, a practice known as predictive analytics. While AI can provide valuable insight, it should not replace human judgement, and employers should tread carefully when using AI to make a decision affecting an employee’s employment.
By way of example, in 2022, three make-up artists employed in the UK by an Estée Lauder subsidiary, MAC Cosmetics, were made redundant partly based on an assessment of their performance carried out during an AI video interview. The employees, who had no prior performance issues, were asked about make-up application, which they argued required a practical demonstration. The AI software assessed both the content of the employees’ answers and their facial expressions.
However, no human reviewed the AI video before the redundancy decision was finalised. As the case was settled out of court, we cannot know how a court or tribunal would have ruled on the use of AI in performance management or selection for redundancy.
Employee monitoring is not a new phenomenon. It is accepted that certain roles require CCTV, drug testing and the monitoring of e-mails. However, since the COVID-19 pandemic, studies available online suggest that approximately 85% of employers are concerned about workers’ productivity, and in particular that of remote workers.
As a result, some businesses have deployed AI monitoring software which can allow employers to view unsent e-mails, webcam footage, microphone input and keystrokes. This extreme level of monitoring can violate an employee’s right to privacy. Under GDPR, employee monitoring must be necessary, legitimate and proportionate. Employers must be fully transparent about:
• What AI they are using
• When they are using it
• Why they are using it, i.e. the legal basis, and
• The impact of its use on employees
Where an employer uses AI-generated data, for example performance data, to make decisions affecting an employee’s employment, the employer should act fairly and lawfully and verify that the data is accurate. Failure to comply with data protection legislation could lead to reputational damage and significant fines being imposed on employers.
In 2021, for example, a €2.6 million fine was imposed by the Italian data protection authority on Foodinho, an Italian app-based food delivery company, for breaches of GDPR related to its use of an algorithm to allocate work to delivery riders based on an AI-generated scoring of their reputations.
To mitigate these risks, employers should consider the following steps:
• Be transparent: Consider implementing an employment policy governing the use of AI. Ensure AI systems are compliant with existing workplace policies, in particular data protection policies and privacy notices.
• Assess risk: Carry out a risk assessment and a data protection impact assessment for each AI system.
• Retain a human element: Ensure managers make final decisions regarding employees’ employment.
• Provide training: Deliver AI training to managers and HR staff and ensure it covers accuracy and bias.
The use of AI is due to become regulated in the coming months as the EU’s new AI Act was endorsed by member states on 2 February 2024. Once the Act comes into effect, developers of AI technologies will be subject to stringent risk management, data governance, transparency and human oversight obligations.
It is worth noting in this regard that the Act classifies AI systems used for “employment, work management and access to self-employment” as high-risk. Accordingly, these systems will need to be assessed before being placed on the market and throughout their lifecycle, even though many such systems are already on the market and in use.
For more information and expert advice regarding the deployment of AI in the workplace, contact a member of our Employment Law & Benefits or Artificial Intelligence teams.
As the world becomes increasingly digital, harnessing and understanding data is critical to success in most industries. With much of the world accessing the internet daily, data is now a high-value commodity, requiring organisations to have robust cybersecurity measures in place.
In an era of rapid technological and legal advancements, ensuring the security of that data is paramount, particularly as cybercrime continues to rise in scale and complexity. Cybercriminals understand the value associated with data and continue to explore new methods to monetise its exploitation. In addition to monetary consequences, the loss or unauthorised disclosure of data can have serious operational and reputational implications for organisations, not to mention legal consequences such as regulatory enforcement. The last year has demonstrated that businesses must have robust and appropriate security systems in place to adequately protect themselves from falling victim to a cyber incident or data breach.
The potential pitfalls of sub-optimal data security were laid bare in 2023 as a result of several high-profile incidents which attracted significant media and regulatory attention.
The Data Protection Commission investigated a ransomware attack on Centric Health, and the MOVEit data breach impacted an estimated 2,000 organisations. We consider the lessons which can be gleaned from such incidents along with the implications of key European and Irish court decisions from 2023 which may increase the potential for those affected by data breaches to claim damages.
2023 also saw the progress of several significant pieces of legislation which will affect how businesses approach data security going forward, chief among them the Digital Markets Act and the Digital Operational Resilience Act. We cast an eye to the future and anticipate the implications of changes to the legal landscape for businesses that use data extensively. Enhanced legislative oversight, as well as the proliferation of technologies such as generative AI, are likely to be key themes in data security.
This annual review considers key developments in the cyber/data security space in 2023 and distils the most important lessons for businesses in the coming year. We hope you enjoy the first edition.
Mason Hayes & Curran is pleased to announce the appointment of partner Philip Nolan as its new Chair.
Philip’s experience of building high-performing teams and advising leading international companies will be integral to his new role. Philip will continue advising clients as Head of the Technology Law team.
This strategic move underscores the importance for legal firms of having leadership that understands the intersection of law, talent and technology.
We are a business law firm with 117 partners and offices in Dublin, London, New York and San Francisco.
Our legal services are grounded in deep expertise and informed by practical experience. We tailor our advice to our clients’ business and strategic objectives, giving them clear recommendations. This allows clients to make good, informed decisions and to anticipate and successfully navigate even the most complex matters. Our working style is versatile and collaborative, creating a shared perspective with clients so that legal solutions are developed together. Our service is award-winning and innovative. This approach is how we make a valuable and practical contribution to each client’s objectives.
What others say about us

Our Technology Team

“…always go over and above, no matter the issue. They have a wonderful ability to turn advice on complex points around quickly and concisely.”
Chambers & Partners, 2023

“They remain the ‘go to’ firm for privacy matters.”
“Vast experience in dealing with technology companies headquartered in Ireland.”
Legal 500, 2023