IRMS Bulletin Issue 248 November 2025

Page 1


Information Governance and Caldicott Guardian CPD Training

10% DISCOUNT FOR IRMS MEMBERS

Quote HCUK10IRMS when booking

Update your knowledge and skills in information governance and management including the role of the Caldicott Guardian, to ensure effective handling and sharing of patient information.

For a full event listing visit: www.healthcareconferencesuk.co.uk/conferences-masterclasses/subjects/informationgovernance


Data Protection

Training:

Certificated Courses and Qualifications

TKM offers data protection courses approved and certificated by the SQA and BCS.

Coming soon: Auditing Data Protection Compliance Courses are scheduled across the UK and can be arranged in house.

See website for dates. 10% discount on individual course fees for IRMS members. Fees start from £350 for a one-day course (excluding VAT).

For further information and booking please contact us

t. 01599 511277 or 07833 617462

e. liz@tkmconsulting.co.uk

in this issue...

Bulletin Editor

Catherine Burton, G and C Media catherine@gandcmedia.co.uk

Production Editors

Visual Print & Design enquiries@visualprint.co.uk

Chris Callander, G and C Media chris@gandcmedia.co.uk

Publisher

Information and Records Management Society® St James House, Vicar Lane, Sheffield, S1 2EX Tel: 01625 664520 www.irms.org.uk

IRMS Executive Committee

Jaana Pinnick AMIRMS, FIRMS, Chair jaana.pinnick@irms.org.uk

Reynold Leming AMIRMS, FIRMS, Vice Chair · (External) reynold.leming@irms.org.uk

David Reeve AMIRMS, FIRMS, Vice Chair · (Internal) david.reeve@irms.org.uk

Simon Ellis AMIRMS, FIRMS, Commercial Development Director simon.ellis@irms.org.uk

Jim Pittendrigh, Treasurer jim.pittendrigh@irms.org.uk

Nathan Bent, Secretary nathan.bent@irms.org.uk

Paulina Jedwabska, Conference Director paulina.jedwabska@irms.org.uk

Rob Bath, Digital Director rob.bath@irms.org.uk

Suzy Taylor AMIRMS, FIRMS Groups Director suzy.taylor@irms.org.uk

Jenny Obee, FIRMS, Membership Director jenny.obee@irms.org.uk

Roger Poole FIRMS, Professional Standards Director roger.poole@irms.org.uk

May Ladd, Training Director may.ladd@irms.org.uk

Marketing & Communications Director · (vacant) marketing.communications@irms.org.uk

IRMS Officers and Sub-committee Chairs

Maria Lim, Technology Partnerships Officer maria.lim@irms.org.uk

Rebekah Taylor, Finance Officer rebekah.taylor@irms.org.uk

Kamil Soree, Data & Digital Officer kamil.soree@irms.org.uk

Sian Astill, Data & Digital Officer sian.astill@irms.org.uk

David Reeve AMIRMS, FIRMS, Awards Sub-committee Chair david.reeve@irms.org.uk

Jane Proffitt, Accreditation Sub-committee Chair accreditation@irms.org.uk

Published bi-monthly in January, March, May, July, September, November. ISSN 2045-6581 Information and Records Management Bulletin Issue 248 · November 2025

Lauren Cook, Groups Officer groupsofficer@irms.org.uk

Content Officer · (vacant) contentofficer@irms.org.uk

Data Protection & Digital Officer · (vacant) dataprotection-digitalofficer@irms.org.uk

Carys Hardy, Marketing & Communications Officer carys.hardy@irms.org.uk

Jenny Moran, Marketing & Communications Officer jenny.moran@irms.org.uk

Jonathan Nott, General Manager jonathan.nott@irms.org.uk

IRMS Conference

Emma Turley, IRMS Delegate Enquiries emma@revolution-events.com

Deborah Ward Johnstone, IRMS Sponsorship Enquiries deborah@revolution-events.com

IRMS Group contacts

Ireland · Jenny Lynn jenny.lynn@irms.org.uk

Public Sector · Elizabeth Barber AMIRMS, FIRMS elizabeth.barber@irms.org.uk

Higher Education & Further Education · Anne Grzybowski anne.grzybowski@irms.org.uk

IM Tech · Maria Lim maria.lim@irms.org.uk

Information Rights · Craig Clark craig.clark@irms.org.uk

Legal · Iram Ditta iram.ditta@irms.org.uk

London · May Ladd may.ladd@irms.org.uk

Midlands · Mark Smith mark.smith@irms.org.uk

North · Georgina Lee georgina.lee@irms.org.uk

Wales/Cymru · Sarah Phillips sarah.phillips@irms.org.uk

South West · Lauren Cook lauren.cook@irms.org.uk

Schools · Lyn Rouse lyn.rouse@irms.org.uk

Scotland · Khopolo Jamangile khopolo.jamangile@irms.org.uk

Property · Beverley Cunningham beverley.cunningham@irms.org.uk

Financial Services · (vacant) financial@irms.org

How to join the IRMS Go to www.irms.org.uk/join

How to contribute

If you’d like to contribute to the Bulletin, please send copy to the Editor at: catherine@gandcmedia.co.uk. The upcoming deadlines to submit copy are: 15 November for the January issue and 15 January for the March issue.

How to advertise

Deborah Ward Johnstone, IRMS Bulletin Advertising Enquiries Deborah@revolution-events.com

The Bulletin provides a wide spectrum of opinion on information and records management matters: the views of the contributors do not necessarily reflect the views of the Information and Records Management Society®

from the Chair

Dear readers,

Following the summer holidays, it is now officially autumn – and flu season! I have already had my jab, which was promptly followed by a mild cold. Some colleagues have unfortunately caught various viruses, from COVID to various strains of flu – I hope you all recover soon!

IRMS exists to promote the professional development of anyone involved with information management. We do this by encouraging our members to network and share ideas, attend educational and topical events, and access content that we have built over the years that informs and guides. In addition, we back this with an accreditation process, mentoring and discounted training through a network of approved training providers.

Technology moves on increasingly fast, and our current online platform has become dated. It is a black-box solution, and the ongoing issues reported by our members take our team of volunteers a lot of time and effort to resolve.

To better support our well-established community, we have spent time looking at the market and comparing products. As a result, we have selected and invested in new digital infrastructure platforms to better serve our members and website visitors, showcase the IRMS to the outside world, and enable us to continually improve the user experience.

Our new member management system from MembershipWorks links into the most common website platforms, allowing us to improve the user experience for all our website visitors.

The benefits include:

• All physical servers holding member data and backup data are based in the UK.

• The subscription experience is greatly simplified and integrated with Stripe, a leading payment system. This allows us to offer a wider variety of self-service subscription payment options, catering for the different needs of individual and corporate members with automated renewal and generation of invoices, eliminating the complexities and time-consuming manual workarounds of recent years.

• A private member directory, which is easy to customise and offers a powerful search engine with smart keywords. This allows members to better foster connections and add value to their membership, and it also provides members with more direct control over updating their profiles and maintaining privacy settings.

For the website we selected WordPress. It is cost-effective, scalable and supported through an extensive user community. It also has many third-party integrations (including MembershipWorks) and makes it easier for IRMS volunteers to provide support.

The new website is built to the Web Content Accessibility Guidelines (WCAG). This means it complies with WCAG as structured by the four POUR principles – perceivable, operable, understandable and robust – ensuring content is accessible to all users, including those with disabilities.

As I am writing this, our team of volunteers is working hard behind the scenes to migrate valuable IRMS content over from the old website and to set up the new platform, targeting completion of testing by early November. All being well, we will then launch the new platforms in the latter part of November. Keep a lookout for member comms inviting you to have a pre-launch preview!

I take this opportunity to thank our digital team, especially Simon Ellis, Jonathan Nott, Sian Astill and Kamil Soree, as well as everyone who has helped with testing during development, for the many hours that you have so generously given up for this work!

The IRMS conference team and Revolution Events met at the Celtic Manor in mid-August to look at the venue and imagine how the logistics will work next May. I encourage you to take advantage of our early bird rates and book your place before December. We’ve assembled exceptional keynote speakers, and the team

is working hard behind the scenes to deliver another unforgettable event. If you need help making the business case or securing budget, please get in touch.

At the time of writing, the call for nominations for the IRMS Industry Awards 2026 is about to open, and the Awards sub-committee members await your suggestions in all the award categories. We will also be recruiting new members to join the Executive Committee in the roles of Membership Director and Marketing Director, so if you are interested in either of these posts, please submit your application or reach out to us for further information. Many thanks to Jenny Obee, who has had to step down from the Membership Director role due to other pressures on her time.

Finally, I am excited to flag up our newly established collaboration with AIIM, led by Ren Leming and May Ladd (IRMS London). This strategic partnership combines AIIM’s technology-focused expertise with IRMS governance and records management best practices to create comprehensive professional development opportunities. We look forward to exploring future collaboration opportunities that will deliver ongoing value to our respective members.

I wish you all a great (but not spooky!) autumn, until the next issue.

Do you know what’s next for your professional development?

Have you considered IRMS accreditation yet?

If not, what’s stopping you?

Are you:

• A member of the IRMS?

• Someone who works with the management of information in any way?

• Someone with 5+ years’ experience in the profession, or 3+ years with a relevant qualification?

• Someone who can demonstrate an understanding and practical application of the principles and practice of managing and governing information and records?

Then why not apply?

- You’ll be assessed by written or verbal assessment (your choice)

- We can offer you guidance, support and an application buddy

- Regardless of outcome, you’ll have a professional development plan we’ll help you with

irms.org.uk/accreditation

NOVEMBER 2025

irmsnews

New IRMS website

We are delighted to announce a plan for a new website and membership management system for IRMS.

Unlike any of us, our old platform had become dated and inefficient, so we took the decision to upgrade to a new system, and we plan to go live in late November.

Some of the benefits are:

• Better data security with UK-based servers – a subject close to our hearts!

• More secure and flexible membership payments, with the option to renew your subscription automatically.

• Easier management of your profile and control over your privacy settings.

• A private member directory, which is easy to customise, offers a powerful search engine with smart keywords, and makes it easier to connect with others in the community.

Our dedicated staff and volunteers are migrating and updating content, so please look out for more information and opportunities for a pre-launch preview. No plan survives contact with the enemy, so please bear with us: we will be relying on you to give us feedback to ensure the new website works for everyone.

New website FAQs

Why is IRMS doing this? IRMS exists to promote the professional

development of anyone involved with information management. Therefore, our communication platforms must be up to the job of bringing people together and making it easy to access information about everything that IRMS has to offer.

What was wrong with the old system?

Technology moves increasingly fast, and our current platform had become dated. This caused a few issues, including payment glitches, which were annoying to members and took a lot of time and effort to resolve.

We also want to attract new members and felt that our current ‘shop window’ was not visually attractive and did not showcase the many activities, events and member benefits on offer.

Most importantly, the old site did not meet modern accessibility guidance.

Which platform has been chosen?

We have selected WordPress because it is cost-effective, scalable and supported through an extensive community. It also offers many third-party integrations (including MembershipWorks) and makes it easier for IRMS volunteers to provide support.

How will members benefit?

Our new member management system from MembershipWorks links into the most common website platforms, allowing us to improve the user experience for all our website visitors.

Other benefits include:

• All servers holding member data and back-up data are based in the UK.

• Member subscriptions are much simpler and integrated with Stripe, a leading payment system. This means we can offer a variety of subscription payment options, catering for different needs of individual and corporate members and with automated renewal.

• Easier management of your profile and control over your privacy settings.

• A private member directory, which is easy to customise and offers a powerful search engine with smart keywords, making it easier to connect with others in the community.

Have you thought about accessibility?

Yes. We have tried to keep accessibility at the forefront when planning the new platforms. This includes WCAG compliance, which means a website meets web content accessibility guidelines, structured by the four POUR principles: perceivable, operable, understandable, and robust. This ensures content is accessible to all users, including those with disabilities.

If the website does not meet your needs or could be improved, we want to hear from you.

When will it go live?

We are currently migrating and updating content and hope to go live in late November once checking and testing has been completed.

We will be holding events for members to give feedback and familiarise themselves with the new layout.

“The route to senior decision-makers in the Information Management Industry - in public and commercial sectors.”

4,000 information management professionals read each issue of the Bulletin – they could be reading your message too.

For more details and to advertise in the bulletin contact Deborah Ward-Johnstone:

01892 820 936

Twelve steps for organisations to reduce their digital carbon footprint

Digitalisation has brought immense benefits to organisations, but it also carries an invisible environmental cost. As data volumes grow, so too does the energy needed to create, store, process, and manage them, driving up both carbon emissions and operational costs.

By implementing effective records management practices, organisations can reduce data clutter, streamline recordkeeping, and eliminate unnecessary storage demands, lowering overall energy use and improving sustainability. Crucially, these efforts also deliver financial benefits through reduced energy consumption, optimised data infrastructure, and lower storage costs. Informed by the 2024 OASIS White Paper, “The Impact of Digital Decarbonisation in Records and Information Management”, this article outlines 12 practical steps organisations can take to reduce their digital carbon footprint, cut costs, and operate more sustainably.

STEP #1: DEVELOP AND IMPLEMENT A ROBUST DIGITAL RECORDS LIFE-CYCLE MANAGEMENT STRATEGY

This involves defining the stages of a record’s life from creation, active use, and maintenance, to eventual archiving and especially disposal.

By clearly outlining these stages, organisations can ensure that records are properly managed and stored according to their relevance and value. This approach helps prevent the unnecessary accumulation of records and ensures that only important, active records occupy valuable storage space. Data disposal (also called data destruction or data sanitisation) is often a forgotten step in data life-cycle management but should be a critical consideration; applicable destruction standards should be adhered to.

Practical actions:

• Define stages in the data life cycle: create, use, archive, delete.

• Embed these stages into your information governance and compliance policies.

• Apply destruction policies consistently, ensuring data is only retained for as long as it’s needed, particularly for low-value legacy or duplicated files.

• Use metadata to tag records at creation, making future disposal more automated and auditable.
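The tagging-at-creation idea in the last action can be sketched in a few lines of Python. This is an illustrative assumption, not an IRMS or statutory standard: the field names and retention periods are invented for the example.

```python
from datetime import date, timedelta

# Hypothetical sketch: tag each record at creation with a retention period,
# so a scheduled job can later find disposal candidates automatically.

def make_record_metadata(name, created, retention_days):
    """Return a metadata tag for a newly created record."""
    return {"name": name, "created": created, "retention_days": retention_days}

def disposal_candidates(records, today):
    """Return records whose retention period has elapsed."""
    return [
        r for r in records
        if today >= r["created"] + timedelta(days=r["retention_days"])
    ]

records = [
    make_record_metadata("invoice-2017.pdf", date(2017, 3, 1), 2555),  # ~7 years
    make_record_metadata("draft-notes.docx", date(2025, 9, 1), 90),
]
due = disposal_candidates(records, date(2025, 11, 1))
```

Because every record carries its own retention tag, the disposal run needs no human judgement for the routine cases, which makes it both automatable and auditable.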

STEP #2: CONDUCT REGULAR AUDITS OF DIGITAL RECORDS AND MAINTAIN AN EFFICIENT RECORDS MANAGEMENT SYSTEM

Conducting regular audits of digital records is essential to maintaining an efficient records management system. These audits help identify records that are no longer needed, outdated, or irrelevant. Once identified, these records should be systematically disposed of or archived according to established policies. This practice not only reduces data clutter but also helps in freeing up storage resources and reducing energy consumption associated with maintaining large volumes of unnecessary records.

Practical actions:

• Schedule quarterly or biannual audits of your file systems and data repositories.

• Use the Data Entity Audit Framework (see Step 12) to assess data completeness and timeliness.

• Work with departmental leads to evaluate whether data is still needed or can be archived, compressed, or deleted.

• Automate reports on data usage patterns to help identify dark or unused datasets.

• Introduce digital records management systems which can connect existing data silos for better visibility of the full data estate.
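A simple audit pass over a file share can be sketched as below; the 180-day cutoff is an example, not a recommendation, and real audits would also consult retention schedules rather than modification times alone.

```python
import os
import tempfile
import time

# Illustrative sketch: walk a directory tree and flag files not modified
# within a cutoff, as candidates for archive-or-delete review.

def stale_files(root, max_age_days, now=None):
    now = now or time.time()
    cutoff = now - max_age_days * 86400
    stale = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale

# Demo on a throwaway directory with one artificially backdated file.
with tempfile.TemporaryDirectory() as root:
    old = os.path.join(root, "legacy.csv")
    new = os.path.join(root, "current.csv")
    for p in (old, new):
        open(p, "w").close()
    os.utime(old, (time.time() - 400 * 86400,) * 2)  # backdate mtime ~400 days
    flagged = stale_files(root, max_age_days=180)
```

The output of such a scan is exactly the kind of report departmental leads need when deciding what can be archived, compressed, or deleted.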

STEP #3: UTILISE EFFICIENT STORAGE SOLUTIONS

Implementing efficient storage solutions is another aspect of effective records management. This includes using centralised storage systems that allow for easy retrieval and management of records, as well as employing tiered storage strategies that allocate records to the most appropriate storage media based on their usage frequency and importance. For example, frequently accessed records can be stored on high-performance storage systems, while less critical records can be moved to lower-cost, energy-efficient storage. The development of a data quality plan can help to ensure that (i) high-quality data is prioritised and available as appropriate, while (ii) poor-quality data that can occur at any stage of interaction is stored as efficiently as possible or removed from storage.

Practical actions:

• Implement tiered storage: frequently accessed data → high-performance storage; infrequently accessed data → low-energy, cost-efficient storage (eg, cold/cloud storage).

• Centralise data repositories to reduce duplication across silos.

• Work with IT teams to ensure old and redundant backups are reviewed and, where appropriate, retired.
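A tiering rule can be as simple as the sketch below. The thresholds (30 and 180 days) and tier names are illustrative assumptions, not vendor terminology; in practice the policy would come from your information governance framework.

```python
# Hypothetical tier-assignment rule: route each dataset to a storage class
# based on days since it was last accessed.

def storage_tier(days_since_access):
    if days_since_access <= 30:
        return "high-performance"
    if days_since_access <= 180:
        return "standard"
    return "cold-archive"

datasets = {
    "sales-dashboard": 3,          # hypothetical dataset names
    "2022-campaign-assets": 400,
    "hr-monthly": 45,
}
plan = {name: storage_tier(age) for name, age in datasets.items()}
```

Even a crude rule like this makes the energy and cost trade-off explicit: only data that is actually used earns a place on high-performance storage.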

Optimise data use and storage through a robust information governance framework to promote sustainable information practices. Such a framework should involve the implementation of policies and procedures that guide how data are collected, stored, used, and disposed of, with a focus on minimising environmental impact.

STEP #4: COLLECT THE MINIMUM VIABLE DATA FOR A SPECIFIC PURPOSE AND RETAIN IT ONLY AS LONG AS NEEDED

Data minimisation is the practice of collecting only the data necessary for a specific purpose and retaining it only as long as needed. This approach reduces the risk of accumulating dark data and minimises storage costs. For example, in the context of sensor data, it is crucial to ensure that only relevant data are captured at the correct level of granularity. Collecting excessive data, such as recording sensor data too frequently when no significant changes occur, leads to unnecessary storage and processing, ultimately contributing to inefficiencies. Reducing the carbon footprint through good-practice principles for data ethics includes avoiding the proliferation of unnecessary, redundant, or overlapping data infrastructure.

Practical actions:

• Only collect data that is necessary for a clear and agreed purpose.

• Align with privacy-by-design principles and ethical artificial intelligence (AI) practices.

• Avoid high-frequency sensor logging, unless it adds analytical value.

• Review forms, logs, and systems to reduce unnecessary fields or automatically generated data.

STEP #5: IMPLEMENT A CENTRALISED DATA MANAGEMENT SYSTEM(S) TO CONSOLIDATE DATA STORAGE AND MANAGEMENT

Implementing centralised data management systems allows organisations to consolidate their data storage and management efforts into a single, cohesive system. This centralisation ensures that data are consistently categorised, audited, and maintained across the organisation, reducing the risk of redundant storage. Centralised systems also facilitate better access control, data sharing, and compliance with regulations, leading to more efficient and effective use of data assets. Better data management can help to address the impact of digital technologies since their underlying infrastructure, especially increasing data generation and use, can negatively affect the environment.

Practical actions:

• Consolidate data systems wherever possible.

• Appoint data stewards or owners for each system to maintain standards.

• Ensure metadata standards are harmonised across systems.

• Audit data flows to remove redundant exports or integrations.

• Customise existing in-house technology for records management (such as through an enterprise-grade add-on to SharePoint) rather than commissioning new standalone platforms.

STEP #6: UTILISE CLOUD SERVICES THAT HAVE GREEN CERTIFICATIONS TO REDUCE THE ENVIRONMENTAL IMPACT OF DATA STORAGE AND PROCESSING

Utilising cloud services that have green certifications can significantly reduce the environmental impact of data storage and processing. Green-certified cloud providers use energy-efficient data centres powered by renewable energy sources, which helps in lowering the overall carbon footprint associated with data management. Additionally, cloud services often offer advanced data management features, such as automated data archiving and deletion. By choosing cloud providers with a commitment to sustainability, organisations can ensure that their data management practices align with broader environmental goals and reduce their financial costs of storing data. Some data centre operators are, for instance, signatories of the Climate Neutral Data Centre Pact and agree to make data centres climate neutral by 2030.

Practical actions:

• Use providers who are signatories to the Climate Neutral Data Centre Pact.

• Check sustainability credentials (eg, Google’s Carbon-Free Energy, Microsoft’s Emissions Impact Dashboard).

• Optimise cloud storage with life-cycle rules (eg, automatic archiving, deletion of inactive objects).

• Avoid unnecessarily replicating cloud-stored data across regions or environments.
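Life-cycle rules of the kind described above can be expressed declaratively; the sketch below shows the shape Amazon S3 uses. The bucket prefix and day counts are hypothetical, and with the official boto3 SDK a dict like this could be applied via put_bucket_lifecycle_configuration.

```python
# Sketch of cloud life-cycle rules in the S3 rule shape: transition cold
# data to low-energy archival storage, then expire it automatically.
# Prefix and day counts are illustrative, not a recommended policy.

lifecycle_rules = {
    "Rules": [
        {
            "ID": "archive-then-expire-reports",
            "Filter": {"Prefix": "reports/"},
            "Status": "Enabled",
            # Move to archival storage after 90 days of the object's life...
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # ...and delete after roughly seven years.
            "Expiration": {"Days": 2555},
        }
    ]
}
```

Once such rules are in place, archiving and deletion happen without manual intervention, which is exactly what keeps inactive objects from quietly accumulating energy and storage costs.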

Knowing your data is key to a clear understanding of data assets. The absence of this understanding can lead to the accumulation of dark data, or data that are stored but not actively used, which not only wastes storage resources but also hinders an organisation’s ability to extract valuable insights from its data.

STEP #7: AVOID UNNECESSARY DUPLICATION OF DATA AND REDUNDANT DATA CREATION

Implement strong data governance measures to prevent unnecessary duplication of data, which can lead to increased storage and energy costs.

This excess data consumes valuable storage space and increases energy use, particularly in data centres and when stored inefficiently. When organisations systematically identify and eliminate redundant data, they reduce the complexity of their digital systems, which can enhance the overall efficiency of IT operations. Silo working and, indeed, data silos can result in time spent on needlessly creating more data when it may already exist; enhancing corporate memory capabilities to better know your data is important to increase productivity (less time searching) and reduce the data carbon footprint.

Practical actions:

• Use deduplication tools within enterprise systems.

• Standardise naming and classification protocols to reduce accidental duplication.

• Consolidate overlapping datasets and establish a single source of truth.

• Address siloed working habits through better collaboration tooling.

• Use tools that help you avoid duplication and silos, by allowing all teams to access the same digital storage location but with different, role-based permissions.
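Deduplication tools typically work by content-hashing: byte-identical files hash to the same digest regardless of name or location. A minimal sketch, with an in-memory dict standing in for a real file-system walk:

```python
import hashlib

# Minimal deduplication sketch: group paths whose contents are
# byte-for-byte identical, using SHA-256 as the content fingerprint.

def find_duplicates(files):
    """files: {path: bytes}. Return groups of paths with identical content."""
    by_hash = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        by_hash.setdefault(digest, []).append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]

files = {
    "shared/policy-v3.pdf": b"retention policy text",
    "team-a/policy-final.pdf": b"retention policy text",  # identical copy
    "team-b/budget.xlsx": b"budget figures",
}
dupes = find_duplicates(files)
```

Reporting the duplicate groups (rather than deleting automatically) lets teams agree which copy becomes the single source of truth before the redundant ones are retired.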

STEP #8: REGULAR REVIEW AND EVALUATION OF DARK DATA, SO IT IS EITHER EFFECTIVELY UTILISED OR RESPONSIBLY DISCARDED

Regularly review and manage dark data, ensuring it is either effectively utilised or responsibly discarded. Unused data contributes significantly to energy consumption and, consequently, to the overall carbon footprint. Effective management of dark data requires not only classifying and verifying its business value but also maintaining this classification to prevent data from becoming obsolete or ‘dark’ again. This not only carries benefits for better managing data energy consumption but also for organisational decision-making; if some data remains unseen and decisions are only taken on the basis of visible data, then parts of the wider picture are missing, resulting in potentially mistaken conclusions and poor decisions.

Practical actions:

• Identify dark data via usage tracking, log file analysis, and heat mapping.

• Classify and tag datasets with clear metadata to ensure future usability.

• Set policies to regularly review storage buckets, logs, and backup archives.

• Use AI and natural language processing tools to extract meaning from unstructured data before disposal.

• Consider using specialist systems to extract and tag important data from critical documents like contracts, rather than digitally storing the whole file.

STEP #9: DEVELOP ROBUST SYSTEMS FOR KNOWLEDGE MANAGEMENT THAT PROMOTE THE EFFICIENT SHARING AND REUSE OF INFORMATION

Through the knowledge practices of many organisations, an increasing volume of digital data is being generated, processed, and stored, but it is often never reused, creating a huge and unnecessary demand on energy consumption. Ensuring seamless data sharing across different parts of the organisation requires clear data standards and metadata practices to maintain a coherent context for shared data, data access that maintains security, verification of the accuracy of the data, and a common interpretation to facilitate collaboration. Doing so can help to ensure a clearer picture of both an organisation’s IT landscape and its datascape in support of sustainability by reducing software and data waste.

Practical actions:

• Build organisational memory through accessible, indexed knowledge repositories.

• Invest in metadata and tagging protocols to ensure discoverability.

• Where possible, work with suppliers to extract valuable data from records and append it to your existing platforms for better knowledge sharing.

• Encourage reuse of presentations, models, reports, and datasets before creating new versions.

• Promote a “don’t reinvent the wheel” culture through peer sharing and learning platforms.

Be environmentally aware and contribute to the achievement of global sustainability goals, particularly those related to climate action, responsible consumption and production, and sustainable industry innovation. The systematic reduction of digital carbon emissions not only aids in meeting regulatory requirements but also positions organisations as leaders in the global transition towards a low-carbon economy.

STEP #10: EDUCATE EMPLOYEES ON THE IMPORTANCE OF RESPONSIBLE DATA STEWARDSHIP FOR THE ENVIRONMENT

Digital training programmes should educate employees on the importance of metadata management, the risks of dark data, and the organisation’s environmental goals. Training should also cover best practices for data classification, storage, and disposal, ensuring that employees understand their role in maintaining data quality and reducing the organisation’s carbon footprint. By fostering a culture of sustainability and responsibility, organisations can ensure that their information governance practices are consistently applied and effective. Taking a holistic approach to responsible data stewardship means that employee education should go beyond the important matters of privacy and security to include the ecological impact of digital practices.

Practical actions:

• Run workshops on how digital choices (eg, large attachments, redundant copies, unarchived inboxes) contribute to carbon emissions.

• Integrate environmental impact into digital literacy or compliance training.

• Encourage inbox hygiene, shared-drive spring cleaning, and deletion of obsolete files.

STEP #11: PROMOTE RESPONSIBLE AI ENGAGEMENT AND USE

When implementing AI, carefully consider the dataset’s quantity and location, and assess the necessity of using advanced technologies like generative AI to avoid unnecessary energy consumption. While the allure of applying the latest AI technology to our datasets may be tempting, it is essential to scrutinise how such advancements genuinely enhance our decision-making processes. Tools like the data carbon scorecard can be used to evaluate whether methods from descriptive, predictive, or prescriptive analytics are more appropriate than AI to meet project needs, and are key aids in promoting behavioural change and the environmental efficiency of projects.

Practical actions:

• Only use AI when justified by added value.

• Apply the data carbon scorecard to assess if descriptive or predictive analytics suffice.

• If using AI, evaluate the carbon intensity of training and inference stages.

• Store model outputs and logs efficiently, and avoid saving all intermediate states by default.

STEP #12: APPLY THE DATA ENTITY AUDIT FRAMEWORK TO ASSESS ‘DATA FITNESS’

Without clear assessment tools, organisations struggle to distinguish valuable data from waste. Using the concepts of data completeness and timeliness, the Data Entity Audit Framework enables organisations to audit and review their major data entities and attributes across their information landscape, identifying areas where improvements can be made to reduce greenhouse gas emissions and reduce their digital costs. Assessing the ‘fitness’ of data entities is an ongoing activity and assists organisations in identifying data that can be removed.

Practical actions:

• Use the two key criteria, completeness (does it have what’s needed?) and timeliness (is it recorded at the right frequency?), to review data assets.

• Archive or delete datasets that fail on both dimensions.

• Tag compliant datasets with metadata to prevent them becoming dark over time.

• Integrate this framework into regular IT audits or business-as-usual processes.
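The two-criteria check above can be sketched as a tiny triage function. The verdict wording is an illustrative reading of the framework, not its official output.

```python
# Sketch of the 'data fitness' triage: completeness and timeliness are
# assessed per data entity, and each combination maps to a suggested action.

def assess_entity(complete, timely):
    if complete and timely:
        return "keep (tag with metadata to prevent it going dark)"
    if complete or timely:
        return "review and improve"
    return "archive or delete"

# Hypothetical entities with (complete, timely) flags from an audit.
entities = {
    "customer-master": (True, True),
    "legacy-sensor-log": (False, False),
    "supplier-contacts": (True, False),
}
verdicts = {name: assess_entity(*flags) for name, flags in entities.items()}
```

Running this as part of a regular IT audit turns 'data fitness' from a one-off exercise into a business-as-usual control.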

Reducing digital carbon is no longer a niche concern. With data infrastructure now consuming more energy than aviation, addressing digital emissions is essential for meeting environmental, social, and governance goals; controlling costs; and enhancing resilience.

By embedding these 12 steps, backed by evidence, tools, and structured thinking, organisations can not only reduce their environmental footprint, but unlock operational efficiency, better decision-making, and a leadership position in digital sustainability.

The Authors

Professors Tom Jackson and Ian Hodgkinson of Loughborough University are experts in information management and digital transformation. They have published pioneering work on the dark side of digital decarbonisation, written about digital transformation for leading academic journals, created the first publicly available data carbon forecasting toolkit, and fed into global AI policy development as members of the OECD.AI Compute and Climate Expert Group.

<insights@oasisgroup.com>

<https://www.oasisgroup.com/cuttingcarbon-emissions-through-reimaginedinformation-management/?utm_source=irms&utm_medium=email&utm_campaign=digital_decarbonisation&utm_id=digital+decarbonisation>

An overview of Microsoft 365 Copilot

The use of unstructured data as an input to generative artificial intelligence (AI) processes

Microsoft 365 (M365) Copilot is an example of a ‘retrieval augmented generation’ (RAG) system. RAG systems are a way for organisations to use the generative AI capabilities of external large language models in conjunction with their own internal information. M365 Copilot enables an individual end user (once they have a licence) to use the capabilities of one of OpenAI’s large language models in conjunction with the content they have access to within their M365 tenant.

The basic idea of a RAG system is that an end-user prompt is ‘augmented’ with relevant information from one or more sources (typically from a database or other information system within an organisation’s control). The augmented prompt is then sent to the large language model for an answer. A rigid separation is maintained between the large language model and the organisational information systems used to augment the prompt.

RAG systems aim to harness the advantages of large language models (their creative power, ability to manipulate language, and vast knowledge) whilst mitigating their main disadvantages (their tendency to hallucinate, their ignorance of their own sources, and the likelihood they have not been allowed to learn anything new since they finished their training).
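The augment-then-ask pattern can be sketched in a few lines. The toy retriever and the prompt template below are stand-ins to show the idea, not M365 Copilot’s actual internals.

```python
def retrieve(prompt: str, corpus: dict[str, str], top_n: int = 2) -> list[str]:
    """Toy retriever: rank documents by crude word overlap with the prompt."""
    words = set(prompt.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_n]]

def augment(prompt: str, passages: list[str]) -> str:
    """Build the augmented prompt that is sent to the external model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {prompt}"

corpus = {
    "policy.docx": "Retention policy documents are reviewed every two years",
    "menu.txt": "The canteen serves soup on Fridays",
}
question = "How often is the retention policy reviewed?"
augmented = augment(question, retrieve(question, corpus))
# 'augmented' is what goes to the large language model; the model never sees
# the rest of the corpus and is not retrained on any of it.
```

The rigid separation described above corresponds to the last two lines: only the augmented prompt crosses the boundary to the model, never the repository itself.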

Many RAG systems will use a structured database as their source for augmenting the large language model. Such a structured database will act as a ‘single source of truth’ within the organisation on the matters within its scope.

The interesting thing about M365 Copilot is that it is using the unstructured data in an organisation’s document management, collaboration, and email environments (SharePoint, Teams, and Exchange) as the source of content to augment each end-user prompt.

These environments do not function as a single source of truth inside an organisation. They will contain information in some items (documents/messages etc) that is contradicted or superseded by information in other items.

Using unstructured data as the source for a RAG system poses two key challenges for the system provider:

• the system must respect access permissions within those unstructured data repositories; and

• the system must be provided with a way to determine what content is most relevant (and what is less relevant, or irrelevant) to any particular prompt.

M365 Copilot is not seeking to change access permissions or retention rules. In fact, those access permissions and retention rules can be seen as guardrails on the operation of M365 Copilot.

THE COMPONENT PARTS OF M365 COPILOT

M365 Copilot itself has a relatively small footprint within an M365 tenancy. It adds just three types of thing to your M365 tenant:

• Some new elements to the user interface of the different workloads and applications (Teams, Outlook, PowerPoint, Word, etc) to enable individuals to engage with M365 Copilot and enter prompts for Copilot to answer.

• A set of semantic indexes that Copilot can search when an end user enters a prompt. When an organisation starts using M365 Copilot, a tenant-level semantic index is built that indexes the content of all the SharePoint sites in the tenant. Each time the organisation purchases a licence for a new individual user, a user-level semantic index is built of all the content in that user’s own accounts, together with any documents that have been specifically shared with them.

• Tools (such as the Copilot dashboard and the Copilot usage report) that enable administrators to monitor how Copilot is being used.

The reason that Microsoft has needed to add so little to the tenant is that M365 Copilot works with three pre-existing components:

• The Microsoft Graph, which is a pre-existing architectural feature of every M365 tenant. In essence, it is a searchable index of all the content held within the tenant and all the interactions that have taken place within it. Copilot’s new semantic indexes are built on top of the existing Microsoft Graph.

• A group of large language models (of which the most important, at the time of writing, is GPT-4) that sit outside of any M365 tenancy. They are owned and built by OpenAI, but they sit on the Microsoft Azure cloud platform.

• An API layer to govern and facilitate the exchange of information between your tenancy and the external large language model. M365 Copilot uses Azure OpenAI Service to fulfil this role. The purpose of the Azure OpenAI Service component (and the contractual terms that underpin it) is to prevent information provided to the large language model from being used by OpenAI to retrain the model or to train new models. This is important to maintain the separation between the information in your tenant and the large language models that OpenAI and others make available to consumers and businesses.

This Azure OpenAI Service is an outcome of the partnership between Microsoft and OpenAI. OpenAI is an AI research lab that has developed the GPT series of large language models. It offers those models in consumer services (like ChatGPT) and to businesses. Microsoft hosts the GPT models on its Azure cloud platform and is a major provider of funding to OpenAI. The illustration shows what happens when an individual enters a prompt into M365 Copilot chat:

• The prompt is sent to both the user-level semantic index (of content specific to that individual) and the tenant-wide semantic index (of all SharePoint content);

• The semantic indexes drive a targeted search of Microsoft Graph to find information relevant to the individual’s prompt;

• Copilot sends the prompt, enriched with the information found in Microsoft Graph, to Azure OpenAI Service;

• Azure OpenAI Service opens a session with one of OpenAI’s large language models (typically GPT-4, which also serves as one of the models behind OpenAI’s consumer ChatGPT service);

• GPT-4 provides a response;

• Azure OpenAI Service closes the session with GPT-4 without letting OpenAI keep the information that was exchanged during the session; and

• Azure OpenAI Service provides a response back to M365 Copilot, which does some post-processing and provides it to the end user.

The Author

Dr James Lappin has worked in archives and records management for thirty years. In 2024 he was awarded a PhD for his thesis ‘The science of recordkeeping systems’. He is Senior policy lead in knowledge and information management in the UK Government Digital Service, and recently worked on the GDS AI Insights Guide ‘Using AI to manage the digital heap’. He is the author of the Thinking records blog. All views in this article are his own.

<www.thinkingrecords.co.uk>

Approaches for applying retention labels in SharePoint sites and Microsoft Teams

As many of you will know, I’ve been pretty exclusively focussing on retention in Microsoft 365 for the past decade. During this time, I’ve helped dozens of organisations radically improve the way that they manage their digital records, by introducing architectures that centre on information management life cycles.

However, while many organisations have now taken the plunge into managing records in SharePoint and Microsoft Teams, the vast majority still haven’t taken advantage of the powerful retention capabilities that are included within their Microsoft 365 licences.

This paper outlines all of the different approaches that Microsoft provides, which allow you to apply retention labels to your content across SharePoint Sites and Microsoft Teams. My objective here is to help inform your decision around which of the various approaches might be most suitable for you to use when planning your organisation’s adoption of retention labels.

There are seven approaches that can be used to apply retention labels to your content in Microsoft 365:

• Manual application

• Custom logic/processes

• Trainable classifiers

• Sensitive information types

• Exact data match

• Keyword query

• Content processing (SharePoint Premium)

• Default labels

Each of these approaches is described in more detail in the sections below:

MANUAL APPLICATION OF RETENTION LABELS

Manually tagging content is the most straightforward (yet possibly least successful) method of applying retention to your content in Microsoft 365. With this approach, you publish your retention labels so that they are available for staff to use, and then typically spend a lot of time trying to remind people about the benefit of applying them to files.

In the vast majority of organisations, this approach simply won’t work – let’s face it, staff rarely tag their information (and even when they do, they rarely tag content with any real accuracy). Not only does this manual approach introduce a new responsibility for staff, but, in my experience, it tends to produce fairly poor results, with relatively small volumes of content being classified and incorrect labels frequently selected.

That said, I have seen it work in some organisations – but only where staff are already acutely aware of the importance of tagging content accurately. For example, you might find that manual application of retention labels is a viable option for a small pharmaceutical company undertaking clinical drug trials, where every ‘i’ is already dotted, and every ‘t’ is routinely crossed.

However, it is worth noting that if your organisation is limited to using E3 licences (or below) and doesn’t have any of the add-on Purview licences, then unfortunately, this manual approach will be your only option for applying retention labels to your content. As such, all of the other approaches detailed in this article are only available for organisations with more advanced licences (typically E5 or equivalent).

CUSTOM LOGIC/PROCESSES

The most flexible approach for applying retention labels to files is to introduce your own custom logic. Whether you are using PowerShell scripts, C#, or running Power Automate processes, it’s perfectly possible to call into Microsoft 365’s APIs to apply retention labels to your content.

This approach forms the basis of many of the more reliable records management solutions that I’ve seen introduced across Microsoft 365. However, unfortunately, this approach will invariably require the development (and subsequent maintenance) of bespoke logic or processes. Typically, this makes the overall cost of applying retention labels via custom logic relatively expensive, something that significantly reduces the viability of this approach for many organisations.

My understanding is that applying retention labels programmatically in this way requires your users to have an E5 licence (or one of the compliance add-on licences). Certainly, that was the case when I published the Retention Licensing Update blog in December 2023. However, at the point of writing (March 2025), I can no longer see this specifically mentioned in Microsoft’s guidance around “Licenses for other retention label application methods”.
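As an illustration of the custom-logic route, Microsoft Graph documents an endpoint for setting a retention label on a drive item. The sketch below only builds the request rather than sending it; the IDs, label name, and token are placeholders, and the exact endpoint and its licensing requirements should be verified against Microsoft’s current documentation before use.

```python
import json
from urllib.request import Request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_set_label_request(drive_id: str, item_id: str, label: str, token: str) -> Request:
    """Build (but do not send) a PATCH request to apply a retention label to a file."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/retentionLabel"
    body = json.dumps({"name": label}).encode()
    return Request(
        url,
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Placeholder IDs and label name for illustration only.
req = build_set_label_request("drv123", "item456", "Contracts - 7 years", "TOKEN")
# urllib.request.urlopen(req) would send it, given a valid access token.
```

Wrapping the raw call in a small function like this is typical of the bespoke logic described above – which is also why it carries an ongoing maintenance cost.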

TRAINABLE CLASSIFIERS FOR APPLYING RETENTION LABELS

Having been around for several years now, trainable classifiers provide an artificial intelligence (AI) driven approach to automating the application of retention labels. They can be an invaluable tool, allowing you to automatically label content regardless of where it is stored across your SharePoint architecture.

Microsoft provides you with a range of pre-trained classifiers, which you can choose to make use of; however, as these models have not been trained on your own information, you will definitely want to verify the results closely. Instead, most organisations that are exploring trainable classifiers tend to build their own models, by training the AI on their organisation’s own content.

To build your own custom trainable classifier, you will need a fairly large number of files, which all contain the same type of information. This type of content should ideally have a standard document structure, as this will allow the AI to more accurately determine whether each file should be labelled or not.

You will then need to seed the AI with at least 50 positive examples of this type of file, alongside 150 negative examples. From my testing, you might well want to exceed the minimum number of seed files to help improve the accuracy of the AI. The trainable classifier then takes a day or two to appraise the seeded content, and if it is successful, allows you to publish it across your tenant.

While using AI to automatically apply retention is a bit of a utopian dream for many records managers, there are a few drawbacks that you will probably want to consider. Firstly, to go to the effort of training a model, you will need to have hundreds (or more likely thousands) of files containing the same type of information, and for each of these files to have fairly consistent structures. This could certainly be the case for some of the information assets you possess; however, it’s likely that many of the types of content you store will not be suitable for trainable classifiers. As a result, this approach will probably only be able to apply labels to a minority of the content that you possess.

In my opinion, the biggest drawback to be aware of is that trainable classifiers only work on content that is less than 6 months old (see: Automatically apply a retention label to Microsoft 365 items | Microsoft Learn). This means that trainable classifiers aren’t suitable for applying labels to legacy content.

SENSITIVE INFORMATION TYPES

Sensitive information types are becoming central to many of the capabilities found in Microsoft Purview. They are used to scan information in your tenant to find personally identifiable information (PII) that is stored in a consistent ‘codified’ format. What I mean by this is that sensitive information types allow you to find content containing codes or IDs such as passport numbers, social security numbers, asset IDs, or credit card numbers.

Where sensitive information types really shine is if you happen to have a unique ID applied to the data that you process. For example, a housing agency might well have files tagged with property IDs and tenant IDs, which could be easily identified (and then labelled) by using a custom sensitive information type.

I should emphasise that the more unique the format of your ID is, the more accurate your sensitive information type will likely be. For example, if your ID is a five-digit number, it’s very likely that the sensitive information type will find (and apply labels to) files containing other five-digit numbers (such as the first half of a mobile phone number). Whereas a 12-digit alpha-numeric code, with a consistent format of characters is far more likely to be unique (and as such, far less likely to be mis-labelled by a sensitive information type).
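The point about ID uniqueness can be demonstrated with two regular expressions. These formats are hypothetical, and real sensitive information types are configured in Purview rather than written as raw regex, but they are pattern-based underneath.

```python
import re

# Hypothetical ID formats for illustration (not real Purview configuration):
five_digit = re.compile(r"\b\d{5}\b")                 # any bare five-digit number
structured_id = re.compile(r"\b[A-Z]{3}\d{6}[A-Z]{3}\b")  # e.g. ABC123456XYZ

text = "Call 07833 617462 about asset ABC123456XYZ (order 54321)."

five_digit.findall(text)     # picks up the phone-number prefix too - false positives
structured_id.findall(text)  # matches only the structured asset ID
```

The looser pattern matches the first half of the phone number as well as the order number, while the 12-character structured format isolates exactly the asset ID – which is why a more distinctive ID format yields fewer mislabelled files.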

In the right situation, sensitive information types are superb. However, if you don’t have a unique ID within your content, then the value of this approach is minimal for records managers. Even if you do have a unique ID that you can use, you then need to ask yourself which retention label should be applied to all of the content containing that ID. Very often, you might have multiple information assets that all contain a specific ID, but which need to be retained for different durations. This is why I’ve personally come to the conclusion that sensitive information types are typically much better at applying sensitivity labels than they are at retention.

EXACT DATA MATCH (EDM) SENSITIVE INFORMATION TYPES

EDM provides an alternative way of allowing a sensitive information type to apply labels to your content, by leveraging exact, or near-exact, data matching rather than patterns. Instead of looking for a codified ID, EDM allows you to look for your organisation’s own data when it is stored inside your content.

For example, if you have an HR system containing employee data, or a CRM containing information around your sales opportunities and clients, you can look for the actual data contained within these repositories across your Microsoft 365 content. This approach allows you to find all of the information that you store, which contains, say, the name and date of birth of one of your staff, employee ID, or perhaps the postcode of one of your specific customers, and then use this as a way of applying labels to your content.

In many ways, EDM is an incredibly powerful tool, which can provide far greater accuracy and precision than standard sensitive information types. You aren’t just labelling a file containing a sequence of numbers resembling the format of a phone number, EDM lets you label a file containing one of your customer’s actual phone numbers.

The challenge with EDM is that it is very reliant on the data you have available to use – this means that every organisation will need to plan which specific labels the existence of specific data should trigger. This could form part of an internal process where HR systems are interrogated and stored in a format the EDM can reference, which is updated on a set frequency to ensure accuracy of the data supplied.

EDM has huge potential, although I suspect in most scenarios it will be far more useful for applying sensitivity labels than it will be for retention. As with standard sensitive information types, it will also only ever be able to label the minority of your files, meaning that it probably won’t form the backbone of your records management architecture in Microsoft 365.

KEYWORD QUERY

Probably the most powerful of the automated approaches provided for applying retention labels is keyword query – a function that Microsoft currently refers to as the ability to “apply labels to content that contains specific words, phrases, or properties”.

This is effectively a tool that allows you to define a series of conditions, which determine which files that retention label is applied to. For example, you could choose to create a query which applies a specific retention label to all of the files in your tenant that contain the phrase “commercially confidential”, use the “Corporate Policy” content type, and have also been tagged with SharePoint metadata indicating that they belong to the “Human Resources” department. As you can imagine, if you are controlling the conditions that you include within the query, you will have full control over precisely which files are automatically labelled.

While the tool is incredibly flexible, it can be a little bit technical to configure, as the conditions need to be defined in Keyword Query Language – as such, frequently some IT assistance is required.
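The example above can be expressed as a single Keyword Query Language condition; the sketch below simply composes the string. The property names are illustrative – in particular, a column such as Department must be mapped to a managed property in your tenant’s search schema before it can be queried.

```python
# Composing the example conditions as a KQL string.
# Property names are illustrative; check your tenant's managed properties.
conditions = [
    '"commercially confidential"',     # phrase match on the file's content
    'ContentType:"Corporate Policy"',  # SharePoint content type
    'Department:"Human Resources"',    # metadata column (illustrative name)
]
kql = " AND ".join(conditions)
```

Each extra AND clause narrows the scope, which is exactly the restrictive-versus-liberal tension the next paragraph describes.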

The major challenge that many encounter with this approach is trying to set appropriate conditions. If your conditions are too restrictive, then many files might be excluded from the scope; while conversely, if your conditions are too liberal, then too many items will likely receive an incorrect label. It’s for this reason that keyword query isn’t typically used as the primary tool for applying retention labels.

DOCUMENT PROCESSING WITH SHAREPOINT PREMIUM

SharePoint Premium’s document processing models use AI to automatically find and extract metadata from within your files. For example, you could use document processing to scan purchase orders and automatically extract metadata from them, such as the PO number and total value of the goods/services in the order.

The reason I mention document processing is that it can be used to automatically apply retention labels to every file that it successfully identifies. Let’s be honest, you won’t be purchasing document processing to meet your retention goals (you’d buy it for its metadata extraction capabilities). However, if you are planning to use document processing, you should certainly consider how it can apply retention labels to the files that it appraises.

Document processing requires an additional pay-as-you-go licence, which is currently priced at $0.05 per processed page.

DEFAULT LABELS

In my opinion, setting default retention labels on all of the document libraries in your Microsoft 365 tenant is by far the most effective solution for applying retention to your content in Microsoft 365. This approach has been at the heart of many of the most successful retention architectures that I’ve seen.

Once default retention labels have been applied, all users need to do is upload their files and start working on them. The content then becomes automatically labelled, based on the location that the user has chosen to store it in. Importantly, staff can choose to manually change the default label, if they feel a different label is more appropriate to apply.

The only challenge that this approach presents is for organisations to work out how they are going to set the right default retention label on each of their libraries and folders. Making this process effortless is one of the main reasons Orinoco 365 has been introduced. The product allows you to automatically apply default retention labels that are contextual to the activity being undertaken across every new SharePoint site and Microsoft Team that is requested. Even if a user creates a new document library, Orinoco 365 automatically intercepts it and applies your chosen default retention label for you.

The Author

Having overseen some of the most significant records management solutions in Microsoft 365, Rob Bath has become a well-known information governance expert, who regularly provides his thoughts to the wider community. Currently serving as IRMS Digital Director, Rob has recently launched Orinoco 365, a product that makes it much easier to consistently apply Microsoft’s retention capabilities to your content in Microsoft 365.

<https://orinoco365.com>

Records managers — what are you thinking?

Information and records management (IRM) is evolving into an esteemed corporate discipline. To guide their companies to that future state, records managers need to have not only that vision of the future, but also the right frame of mind.

It is not easy to be a records manager at this time in history, given the evolution of IRM from primarily the management of off-site storage to the governance of records, and now to the governance of official records as corporate assets. IRM is destined to evolve into an esteemed corporate discipline which assures that records are managed in a way that provides for efficient business operation, satisfaction of legal and regulatory requirements, and risk avoidance from having too few or too many records – all at minimal cost. Increasingly, IRM requirements and processes will be monitored vigilantly and audited periodically to confirm compliance. IRM professionals will participate in business development, asking questions such as “What information must be captured and in what form will the official record(s) be established?” They’ll also actively participate in the selection and establishment of business applications, databases, and records repositories, asking questions like “Does the technology facilitate or at least make it easy to store and dispose of official records and/or copies?”

This is the future of IRM – is that what you’re thinking?

Well, you say – “That’s not where I live – no, that’s not what I’m thinking – I have trouble getting individuals to fill out transmittal sheets, or to place official records in the correct repository, or to understand they shouldn’t keep every record they ever touched. What else should I be thinking?”

Well, understanding the value of IRM, you are the one best able to guide your company to that future state, even while struggling through everyday issues. So what should you be thinking?

You should be thinking:

• I have an essential role to guide and move my company forward in the identification, selection, and implementation of IRM requirements, tools, and processes.

• I have knowledge and understanding that needs to be absorbed and acted upon by employees, not just communicated.

• I need to be a visionary, teacher, coach, motivator, facilitator, guide, encourager, problem solver, and helper.

• Implementation must be complete and comprehensive. Finding an old copy of a record in an obscure location is no different than finding the original posted on a bulletin board. It does no good to be rid of almost all copies.

• No exemptions from requirements. Exceptions may be given for specific reasons, and for specific periods of time, with identified corrective action, but no exemptions.

• We don’t do IRM to satisfy legal and regulatory requirements. IRM requirements, tools, and processes are efficiency improvements that enable the company to be more profitable and the employees more productive. We do IRM to efficiently run the business, make money, and satisfy our customers. Along the way, we satisfy legal and regulatory requirements also.

Implementation of a comprehensive IRM governance program is a cultural change – culture shock for some. It changes what, and therefore who, is valued. Companies commonly contain individuals who have ‘kept it all’ and gained a reputation as the ‘font of knowledge’. With the arrival of professional IRM management, these employees are required to give up their personal historic archive, place official records in an authorised repository, and dispose of convenience copies when they have no personal need of them or the retention requirement has passed. Cultural change is often the cause of pushback and resentment. When individuals argue or complain about the smallest details or insignificant points – look to the impact on them or their job for the real source of difficulty.

The Author

Craig Grimestad is a senior consultant with Iron Mountain Consulting. His specialty is designing RIM core components with subspecialties for RIM auditing and change management. He holds an MSc in engineering and was the Records Manager for the Electro Motive Division of General Motors, where he participated in the development of the GM Corporate RIM program, and implemented and managed Electro Motive Division’s RIM program.

<www.ironmountain.com/uk/resources> <www.linkedin.com/in/craig-grimestad2214b37>

Evaluating the current state of AI throughout UK local government

Local government is being firmly encouraged to accelerate the adoption of artificial intelligence (AI) in a bid to improve efficiency and productivity. While the potential is compelling, not least the chance to empower existing staff with the tools and insight required to meet escalating service demands, perspective is key.

Following technology exploration, with significant prototyping and ideation, as impactful implementation begins to gather pace throughout local government, tried and trusted processes for both service delivery and application development cannot be overlooked.

AI will deliver innovation at a new pace, but it is important to step back from the hype and accurately assess what can safely be achieved today. Rick Hassard, Director of Engineering and member of the AI Board at Idox, explains the importance of local-authority-specific AI development and close collaboration between suppliers and local authority experts, especially within regulated areas such as planning.

INCREASING PRESSURE TO ADOPT AI

AI is on every local authority agenda. With a projected £8 billion funding black hole by 2028/29 and the threat of Section 114 notices adding further strain, there is no doubt that change is required. Every aspect of local government spending is facing review, and with service demands increasing, especially in areas such as planning and social care, local authorities are experiencing enormous pressure to deliver more with less.

The shortage of skilled staff is adding to the challenge. While skill shortages vary depending on each council and region, the LGA’s workforce survey suggests recruitment difficulties in occupations such as planning, legal, digital, environmental health, finance, and children’s and adults’ social workers. The issue is particularly pressing within planning, with around eight out of 10 planning departments short of staff.

With only 20% of departments having enough planners to handle applications, it is unsurprising that only one in five applications for major projects was decided within the 13-week statutory period over the past three months. The government has pledged to recruit 300 council planners, but the study points out that this only plugs 15% of the shortfall.

These issues would be challenging within a stable environment. For local authorities facing an increase in total expenditure on adult social care, government targets to build 1.5 million new homes and a shift towards devolution, where is the time or resources required to achieve digital transformation or embrace innovation such as AI?

AI’S POTENTIAL TO IMPROVE EFFICIENCY AND PRODUCTIVITY

The government is firmly committed to the use of technology to improve performance, efficiency and productivity across both central and local government. Every week seemingly brings a new announcement of AI adoption in an array of areas, with pothole management, traffic management, benchmarking attendance in schools and minute taking in meetings just a few examples. The objective in each case is to improve the performance and efficiency of existing staff by streamlining processes and removing mundane tasks.

One of the biggest areas of focus is, of course, the planning sector. Recently, the government announced the development of the Extract AI assistant to support faster planning decisions. In test trials across Hillingdon, Nuneaton & Bedworth, and Exeter councils, Extract digitised planning records, including maps, in just 3 minutes each – compared to the 1–2 hours it typically takes manually. This means Extract could process around 100 planning records a day – significantly speeding up the process.

The potential is clear: local government experts using the latest iterations of GenAI tools, including large language models (LLMs), can achieve better quality, greater speed and reduced costs. The Tony Blair Institute worked with a local council in the UK and estimated that AI could be applied to 26% of tasks, saving 1 million work hours, or £30 million, each year. Scaled up nationally, this would mean around £8 billion in cost savings.

COLLABORATIVE APPROACH TO AI EVOLUTION

This is not a one-way process. It is essential that local government is an active participant in the AI evolution to ensure products are developed that not only meet operational needs but do so safely and effectively. Local authority experts understand the regulation and how systems are used within that regulatory framework. Their input is key to ensuring AI is directed towards the areas that add quantifiable value to existing staff.

Environmental Health inspectors, for example, are in short supply, so it makes sense to use AI to support an intelligence-led, risk-based approach to inspections that maximises the value of these skilled experts. Similarly, within planning, there are areas of work that do not require a skilled building control surveyor or planning officer, and the use of AI to streamline and automate these processes can release skilled experts to focus on more complex, valuable activity.

Close collaboration between software developers and local authority experts will accelerate the development of innovations that can be deployed at scale. In addition to confirming the priority areas for development, human oversight from local authority experts – sanity-checking the veracity of AI responses, for example – is key to building both accuracy in the model and confidence in the technology. Furthermore, working with existing technology partners with an extensive understanding of, and commitment to, the UK local government environment will also safeguard valuable data resources.

CURRENT STATE OF AI EVOLUTION

But where do these pilot and scoping developments leave local authorities? There is a gap between a concept or pilot and widescale operational use. Are these solutions available for deployment? Can they be used alongside existing systems? What integration is required? How much will it cost? Going from proof of concept to production requires investment – and how is that investment to be funded? Where is the business case, the proven value, even the trusted deployment partners?

While consumer GenAI tools are being subsidised by the tech industry, the reality is that AI can be costly to develop and implement. In addition, rising concerns about hallucinations, bias and fairness can undermine confidence both amongst end users and, potentially, citizens who will be worried about the implications for service delivery. This is of particular importance in areas driven by regulation, including planning and public protection. From assessing planning applications to managing health inspections, it is vital to be 100% confident in the accuracy and relevance of information provided by AI before any widescale deployment can be achieved.

Local government needs a voice. Local authorities need to understand where and how central-government-driven projects with tech suppliers fit into digital transformation strategies. They need clarity about the risks and rewards associated with data ownership and usage, and the difference between generic AI tools and those designed specifically for UK local government. This is where existing suppliers have an essential role to play in reassuring local government that AI innovations are being built to meet specific, UK-relevant requirements.

A VOICE FOR LOCAL GOVERNMENT

From Idox’s expertise and years of experience, we know where innovative technologies fit into existing systems. Technologies such as retrieval-augmented generation (RAG) allow LLMs to retrieve and incorporate new information, helping to prevent hallucinations by grounding responses in trusted data sources – such as planning-specific information – to improve relevance and accuracy.
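The grounding pattern that RAG relies on can be illustrated in a few lines. The sketch below is not any vendor's implementation: it uses naive keyword overlap in place of the embedding-based vector search a production system would use, and the planning "knowledge base" is entirely hypothetical.

```python
# Minimal sketch of the RAG pattern: retrieve trusted documents, then
# ground the LLM prompt in them before asking the question.
# Illustrative only - real systems use embedding-based vector search.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank trusted documents by crude keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Prepend the retrieved sources so the model answers from them,
    reducing the risk of hallucinated answers."""
    sources = "\n".join(f"- {doc}" for doc in retrieve(question, documents))
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say so.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

# Hypothetical planning-specific knowledge base
docs = [
    "Householder planning applications must be decided within 8 weeks.",
    "Major applications have a 13-week statutory determination period.",
    "Listed building consent is required for works affecting special interest.",
]

prompt = build_grounded_prompt(
    "What is the statutory period for major applications?", docs
)
```

The key design point is that the model is instructed to answer only from retrieved, trusted text; that is what anchors responses in planning-specific information rather than in whatever the model was originally trained on.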

Another example is sentiment analysis of local authority data – whereby the meaning embedded within the text is assessed to gain greater insight into emotional context – which can be used to analyse customer feedback and improve customer service. Within planning, for example, sentiment analysis of resident responses to planning application statements can provide immediate insight into whether the perception is negative or positive.
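As an illustration only – real deployments would use a trained sentiment model rather than a hand-made word list – a lexicon-based scorer over hypothetical resident comments might look like:

```python
# Sketch of lexicon-based sentiment scoring of resident comments on a
# planning application. The word lists are illustrative, not a real lexicon.

POSITIVE = {"support", "welcome", "good", "improve", "benefit"}
NEGATIVE = {"object", "oppose", "concern", "noise", "overbearing"}

def sentiment(comment: str) -> str:
    """Classify a comment by counting positive vs negative lexicon hits."""
    words = set(comment.lower().replace(",", " ").replace(".", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

comments = [
    "We welcome this development, it will improve the high street.",
    "I object to the proposal due to noise and parking concerns.",
]
results = [sentiment(c) for c in comments]  # ['positive', 'negative']
```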

At the heart of successful AI adoption is ensuring local government has access to the right AI models – models that reflect the specific needs of UK local government. Rather than generic LLMs or those trained on government data from Australia, New Zealand, Canada or the US, LLMs specifically trained on UK government data and developed according to UK regulatory requirements are key to delivering the accuracy and certainty required, especially within regulatory areas.

CONCLUSION

The excitement surrounding AI is not misplaced. There are undoubted opportunities to address many of the challenges currently facing local government, not least supporting existing staff to deliver more with less. However, despite the extraordinary pace of change, there is no need to feel rushed into investment.

Indeed, the public sector has the opportunity to learn from the often painful lessons learnt from private sector companies that have rushed headlong into AI development without the essential checks and balances. Aligning development with clearly understood goals is the priority, but companies that have relied on inaccurate data and failed to involve humans in the development process have paid a hefty price.

The Author

Rick Hassard joined Idox in 2019 through the Tascomi acquisition, bringing over 20 years’ experience in driving technological innovation. With a background spanning both local government and the private sector, he applies deep sector knowledge to develop solutions that truly meet user needs. A champion of collaborative leadership, Rick led Idox’s migration to the cloud, unlocking transformative technologies for both the company and its customers. As a founding member of Idox’s AI committee, he is laying strong foundations in emerging technologies, ensuring Idox continues to deliver smarter, future-ready products that empower customers and the communities they serve.

<www.idoxgroup.com/contact-us>

Integrity data breaches: What they are and how to manage them

In today’s digital age, the frequency and severity of data breaches continue to rise, and they are a growing concern for businesses of all sizes.

The number of data breaches reported to supervisory authorities (SAs) – meaning those assessed as posing a risk to the rights and freedoms of the individuals concerned – increases every year, as highlighted in the SAs’ annual reports. For example, the Data Protection Commission (DPC), the Information Commissioner’s Office (ICO) and the Spanish Data Protection Agency reported increases of 11%, 6% and 46%, respectively, in their latest annual reports, compared with the year before.

Confidentiality and availability data breaches, as categorised in “Opinion 03/2014 on Personal Data Breach Notification” of the Article 29 Data Protection Working Party, are the categories most commonly encountered and reported by organisations.

With such breaches, unauthorised individuals gain access to sensitive data and/or there is an accidental or malicious unauthorised loss of access to, or destruction of, personal data. However, another critical category of data breaches is the integrity breach, where an unauthorised or accidental alteration of personal data occurs. In Ireland, integrity breaches concerning accidental/unauthorised alteration of personal data represented 10% of the total breaches reported to the DPC in 2024, as highlighted in its annual report.

This article aims to provide clarity around integrity data breaches, inform the reader of the associated risks, and explain how to prevent and manage this type of data breach.


WHAT IS AN INTEGRITY DATA BREACH?

Article 5(1)(d) of the General Data Protection Regulation (GDPR) places the obligation on data controllers to ensure that the personal data they process is “accurate and, where necessary, kept up to date”. The unauthorised alteration of personal data, either intentionally or unintentionally, in a way that compromises its accuracy, completeness or authenticity, represents an integrity data breach. Such alterations can be a consequence of:

• human errors – accidental alterations, such as incorrect data entry or unintentional data modification

• software issues – unintended changes or corruption of data, as a result of bugs in software updates or patches

• cyberattacks – malicious tampering with personal data for fraudulent purposes

• improper data synchronisation – when systems are not synchronised appropriately, discrepancies in data can lead to integrity issues across platforms
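Whatever its source, a silent alteration can be caught by integrity controls. One common technical approach – sketched below with entirely illustrative field names, using only the Python standard library – is to store a cryptographic fingerprint alongside each record at write time and re-check it on read:

```python
# Sketch of tamper detection via content hashing: any change to the
# record, accidental or malicious, changes its SHA-256 digest.

import hashlib
import json

def fingerprint(record: dict) -> str:
    """Deterministic SHA-256 digest of a record's contents."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# At write time: persist the record together with its fingerprint
record = {"customer_id": 1042, "postcode": "S1 2EX", "dob": "1980-04-01"}
stored_hash = fingerprint(record)

# Later, on read: an unchanged record reproduces the same digest
assert fingerprint(record) == stored_hash

# An accidental alteration is now detectable
record["dob"] = "1981-04-01"
tampered = fingerprint(record) != stored_hash  # True - alteration detected
```

A detected mismatch does not say what changed or why, but it turns a silent integrity breach into a visible incident that can be investigated under the organisation's breach procedure.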

POTENTIAL CONSEQUENCES OF INTEGRITY DATA BREACHES FOR ORGANISATIONS

The unauthorised alteration of data can pose severe risks and can have significant consequences for organisations, such as:

1. Regulatory fines: An organisation’s failure to comply with Article 5(1)(d) of the GDPR – that is, to ensure that the personal data it processes is accurate and up to date – can lead to penalties or fines.

2. Operational disruptions: Accurate data is critical for the appropriate functioning of organisations. Business operations can be significantly disrupted if data is incorrect, incomplete or inconsistent. In many industries, such as healthcare and finance, data integrity issues can lead to operational and service delivery inefficiencies, which can disrupt processes and damage relationships with stakeholders and clients. Furthermore, the breach investigation and implementation of mitigating actions following an integrity data breach can take a long time, depending on severity. In some cases, operations may need to be paused during the investigation, which can lead to loss of revenue.

3. Reputation damage and loss of business: Partners and consumers trust businesses to handle their personal data with integrity. The reputational damage caused by data breaches can be long lasting and can impact an organisation’s ability to gain future investment, to obtain new customers and to retain existing ones. In the healthcare industry, for instance, an integrity data breach can lead to severe consequences for data subjects and, as a result, the organisation’s reputation would be greatly challenged.

4. Legal liabilities: Integrity data breaches may also expose organisations to lawsuits or legal action, where the alteration of data causes harm to individuals.

A well-known example of an integrity data breach occurred in the UK in 2020 within the NHS Test and Trace system during the COVID-19 pandemic. This case highlights the risks, impact and consequences such incidents can have: a data processing error led to the failure to report more than 15,000 positive COVID-19 test results. The breach delayed critical contact-tracing efforts and had public health implications, as infected individuals and their contacts were not promptly informed. The incident demonstrates how data processing errors can compromise the integrity of data, damage the reputation of the organisation involved, and lead to serious consequences for public health and operational efficiency.

HOW TO PREVENT INTEGRITY BREACHES

Given the risks associated with integrity breaches, organisations must take proactive steps to prevent them and have appropriate technical and organisational measures in place.

Examples of prevention measures include:

1. Employee training is a crucial step in preventing data integrity breaches. Staff should be educated on data protection principles, the importance of data accuracy and the potential consequences of accidental alterations of personal data. The delivery of regular and effective staff training and awareness is an essential component of an effective data protection programme in general, and incident management frameworks in particular.

2. A robust data governance framework, which includes clear guidelines and policies on data accuracy and data management throughout the organisation. This is essential for ensuring the reliability and integrity of data and for preventing unauthorised or accidental alterations to data.

3. Regular data quality checks and data validation can help identify potential inconsistencies or errors in datasets and environments before they lead to integrity breaches.

4. Regular security audits should be conducted to identify vulnerabilities in data management systems. These audits should assess whether systems are secure against tampering or alteration that could impact data integrity and quality.

5. Access controls should be put in place to ensure that personal data alterations are restricted to authorised personnel only.

Organisations should implement appropriate technical and organisational measures to ensure compliance. Such measures include the implementation of a robust data breach policy, procedure and risk assessment methodology.

HOW TO MANAGE INTEGRITY BREACHES

The GDPR requires data controllers to be able to identify data breaches promptly, determine the likelihood and severity of risk and adverse effects, and to determine whether notification obligations arise. The European Data Protection Board Personal Data Breach Notification guidelines specify that in addition to, and separate from, the notification and communication of breaches under the GDPR, integrity breaches may also trigger obligations under other regulations, such as the eIDAS Regulation (EU 910/2014), which governs electronic identification and trust services in the EU. Article 19(2) of the eIDAS Regulation requires trust service providers to notify their supervisory authority of any breach of security or loss of integrity that impacts the services provided or personal data maintained.

The Author

Diana Dinu is a Data Protection Advisor and Project Manager at Trilateral Research. In her role in the Data Protection and Cyber Risk Team, Diana supports the provision of high-quality services to clients, with a special focus on data breach and incident management and project/client management.

<https://trilateralresearch.com>

Data definitions — what is transaction data?

This paper is the third in a series that looks at data management and what is meant by terms such as “master data”, “reference data”, “transaction data” and “aggregated data” – terms that information and records management professionals may come across but may not use on a daily basis. The final paper in the series will look at “data as an asset”.

Transaction data, and the relationship between transaction data and master data, is a recurring topic for coffee-break discussions. I’ll put my explanation out there in the hope that it will help, or at least spark a discussion.

DEFINITION

Transaction data is either the result of a repeatable action (a business process) or of an event. Transaction data is always characterised by the point in time at which the action or event takes place.

Transaction data is created by business processes and can be described by nouns that can also act as verbs – or, if the data is the result of an event, by the source and/or trigger of the event.

• Transactions link master data entities and often reference data (lists).

• An event could be “a tweet” (from an X (formerly Twitter) account) or “a call” (which has a caller ID).

• There is no upper limit to the number of transactions.

• The number of transaction records is higher than the number of master data records.

In summary, transaction data happens at a point in time, links master data entities, and can often include reference data.

EXAMPLE I

As mentioned, transaction data refers to at least one master data entity. For example, “a sale” will at least link to the master data entity “product” (if it is an over-the-counter sale) and might also connect to an “employee” (a cashier). If the procuring party isn’t anonymous, the sale will also link to a “customer”.

Reference data might also connect to a sale – eg, “payment terms”, “delivery method” and “currency” – that is, three reference data entities/sets/lists/tables.

The reference to master data entities will only involve a few attributes from the master data entity. The hundreds of attributes that could characterise “product” and “customer” are not included on the invoice.

Some data are particular to the transaction. These are, eg, the number of items sold, the total cost for all items on the invoice, and the date of the transaction. These are the unique transaction data.
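Assuming nothing beyond the article's own example, the sale could be sketched as a data structure in which master data appears only as keys, reference data as codes, and just a few values are unique to the transaction itself (all names here are illustrative):

```python
# Sketch of the sale from Example I as transaction data: keys into
# master data, codes from reference data lists, and the few values
# that belong only to this transaction.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Sale:                        # transaction data
    product_id: str                # key into "product" master data
    customer_id: Optional[str]     # key into "customer" (None if anonymous)
    employee_id: str               # key into "employee" (the cashier)
    payment_terms: str             # reference data list
    delivery_method: str           # reference data list
    currency: str                  # reference data list
    quantity: int                  # unique to this transaction
    total_cost: float              # unique to this transaction
    sale_date: date                # the point in time defining the transaction

sale = Sale(
    product_id="PRD-1041",
    customer_id=None,              # anonymous over-the-counter sale
    employee_id="EMP-007",
    payment_terms="CASH",
    delivery_method="TAKEAWAY",
    currency="GBP",
    quantity=2,
    total_cost=19.98,
    sale_date=date(2025, 11, 3),
)
```

Note how little of the record is new: everything except `quantity`, `total_cost` and `sale_date` is a pointer into master or reference data that exists independently of the sale.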

EXAMPLE II

Some transactions use a lot of reference data in addition to the master data entities. Export/ import across customs zones exemplifies this. An example from an import declaration:

• The product’s customs code (reference data)

• The authorisation under which the product is released (reference data)

• The origin country, transit countries, and destination country (reference data)

• The mode of transportation (reference data)

• The carrier and the interim storage locations (master data)

• The identification of the producer, product, recipient, and customer (master data)

• The number of products, summarised value, and relevant dates (transaction data)

The transaction example shows how little new data the transaction itself generates.

TYPICAL ISSUES

Errors in the master data entities surface when the business processes are executed. If the master data is wrong, the transactions that consume it will be wrong too. The resulting errors range from small delays, annoying incidents, and major errors to outright catastrophes.

Buying a power drill in your local DIY store will usually involve a customer service desk if the product (or this week’s discount) is not registered in the product database.

Wrong parts specifications can cause plane crashes.

If the transaction-specific data is wrong, the date, number of items or total cost will suffer – minor issues, but annoying, especially if the errors are repeated.

The Author

Niels Lademark began his career as a Master of Agriculture but soon realised his mistake and changed to the IT industry. The first stop was working with data governance, back in 1997, before the term was coined. As the master data manager at the University of Copenhagen, he was responsible for collecting and reporting, in one unambiguous format, all the research and science communication produced – from papers to museum exhibitions. After a 5-year tenure at the Danish State Railways as information and enterprise architect, he joined a dedicated information management consultancy in 2007, and later Deloitte after a merger. The project tally as a management consultant ended at 28, after 14 years of consulting, with all of these projects revolving around enterprise architecture or information management. Currently, he works as an enterprise architect at the Nordic Regional Coordination Centre (Nordic RCC), which calculates electric grid capacity across the Nordics to maximise the utilisation of green electricity production capacity.

<nlh@nordic-rcc.net> <linkedin.com/in/nielsheegaard>

Swapping a trowel for a taxonomy

I often feel like a bit of an atypical member of the IRMS. I say ‘atypical’, because unlike most of the society’s members, I’ve never worked as an information governance (IG) or records management professional. That said, our society is a diverse and inclusive community, comprising members with a broad range of overlapping interests – while I’m a bit of an outsider, you’ve all certainly made me feel welcome!

So, how has my career brought me to where I am?

If we rewind to the late 90s (you might recall a time when people were optimistic and music featured more than three chords), a younger version of myself, complete with mandatory long hair, started thinking about the path I wanted to take. Thinking about my career wasn’t exactly at the top of my mind at that stage, so I didn’t fill my parents with a huge amount of confidence when I announced that I was going to study archaeology at university.

After graduating, I joined Meridian Television’s IT team in a junior position. The role wasn’t glamorous and had a steep learning curve, but it was certainly exciting at times (highlights include helping Esther Rantzen recall her password and drinking with Nick Knowles). Despite the fun, I decided to postpone my foray into the workplace by returning to university for a Master’s in Information Studies. The course proved to be a hybrid blend of IT and information management (IM), which covered a range of topics from Java development to the Data Protection Act.

Still having visions of becoming a modern-day Indiana Jones, my first real career step saw me join the Council for British Archaeology as a bibliographer. I was quickly given a creaking Microsoft Access database to look after (shattering any lingering illusion that I’d been appointed on the basis of my archaeology degree). I look back fondly on the role – which granted me access to some of the finest archaeological libraries in the world – including the prestigious collections of the Society of Antiquaries in Burlington House.

Now, it might come as a surprise to many of you, but archaeology isn’t exactly a lucrative career path. After a couple of years, reality caught up with me, and I opted for a career in digital technology instead. While I’ve now been in IT for more than 20 years, I admit that I still secretly hope to be able to return to the archaeological world at some point before my career ends.

In 2003, I joined Hampshire County Council and began to learn the ropes of being a developer. While still very inexperienced, I learned to program in an array of different technologies, including .NET and SQL Server.


After a period of travelling, I transitioned into the private sector by joining a small IT consultancy. I couldn’t have made a better decision. The pace was intense, and at times it felt like a bit of a sink-or-swim experience, but, somehow, I emerged with my head above water. I think it was the variety of the work that I loved most – and the variety of the organisations I got to work with. From large companies like PepsiCo and Nationwide, to public bodies including DEFRA and the Home Office, every project felt like its own adventure.

I think it was in 2006 when I first became aware of an emerging technology called SharePoint. At the time, I didn’t really know what it was, but I certainly understood that there was a huge amount of demand. I quickly retrained and was soon building intranets, document management solutions and even a few websites (those were the days!). I got involved in every part of the process, from database migrations to building custom web parts. I guess donning rose-tinted spectacles is part of the process of reminiscing, but I certainly have fond memories of getting my hands dirty with early versions of SharePoint.

Now I admit, I was never a natural developer. Don’t get me wrong, I wasn’t awful – more adequate at best. This became clear to me during an intranet project, when I was assigned to work alongside an engineer from India. My colleague was an exceptional developer, far better than I was, and on a fraction of a UK wage. By the end of the project, I’d come to realise that my skills were far better suited to communicating the technology than to development. So, I opted to swap coding for consultancy and haven’t looked back.

The size of the projects I was overseeing began to increase steadily; from British Gas’ intranet, to rebranding Vodafone’s global Virtual Town Hall solution. However, it wasn’t until a project for Kew Gardens in 2015 that I really began to specialise in IM in Microsoft 365. The project was to introduce a records management solution, and it required me to learn about unfamiliar concepts, including retention schedules, EDRMS and MoReq.

The following engagement saw me direct a complex information audit for the College of Policing. After that I helped with the definition of the Home Office’s retention architecture in Microsoft 365. Recognising a pattern, I joined IRMS in 2018 (little suspecting that only 2 years later I’d be volunteering as Digital Director).

At around the same time, I joined Intelogy to oversee their fledgling knowledge and IM practice. In the past 7 years, I’ve grown the practice to a point where we are amongst the global leaders in Microsoft Purview. I’m really proud of the exceptional projects we have delivered in this time: defining the architecture used by the University of Oxford; developing a Microsoft Teams provisioning process for the former Department for Business, Energy and Industrial Strategy (BEIS); and guiding the Microsoft 365 strategy for the Organisation for Economic Co-operation and Development (OECD).

One of my recent career highlights was being invited by the National Archives to oversee the development (and subsequent maintenance) of the Information Governance Maturity Model for Microsoft 365. This framework provides everyone with a freely available tool to help understand what ‘good practice’ looks like from an IG/IM perspective.

I’m also especially proud to have overseen the creation of Orinoco 365, a product designed to make it much easier to use the retention capabilities found in Microsoft Purview. The solution is now the cornerstone of the Microsoft 365 architectures of multiple organisations, having successfully controlled the creation and configuration of tens of thousands of SharePoint sites and Microsoft Teams.

Quo vadis?

I guess it’s part of the excitement of a career that none of us know what twists and turns lie in wait. I’d like to think that I’ll continue to be able to provide up-to-date technical guidance in my blog posts and presentations. I’m especially keen to be able to continue to monitor the progress of artificial intelligence and understand where it will support (and potentially where it might hinder) the IM community. So, who knows where the road will lead – but I’m certainly excited by the journey and the people I get to learn from on the way.

The Author

Having overseen some of the most significant records management solutions in Microsoft 365, Rob Bath has become a well-known information governance expert, who regularly provides his thoughts to the wider community. Currently serving as IRMS Digital Director, Rob has recently launched Orinoco 365, a product that makes it much easier to consistently apply Microsoft’s retention capabilities to your content in Microsoft 365.

<https://orinoco365.com>

Mind the gap: Navigating records management terminology as a new professional

In my short professional experience, I often encounter misunderstandings of records management terminology.

This has meant that I automatically assume a lack of understanding of records management terminology, making sure to define terms throughout my interactions with staff at my work. I have observed that this has also had an impact on the understanding and perception of records management. This sparked my interest in how terminology affects records management in the workplace, which led me to investigate this through a case study for my MARM dissertation. I focused on the comprehension of key records management terms, where substantial gaps were present. This article combines my professional experience and research to explore these challenges and propose practical solutions.

EXAMPLES OF TERMINOLOGY WITH ISSUES

Record

While ISO 30300 and ISO 15489 provide standard definitions of a ‘record’ and an ‘authoritative record’, I have found myself debating with fellow professionals about how we apply them. This runs in parallel with the academic literature’s attempts to pin down the boundaries of a record in the modern age. Traditional, analogue records had to be actively acknowledged as records by gathering metadata, including the content, context and structure, but electronic files automatically have metadata attached and maintained throughout their existence. This blurs the lines between static electronic information and records. Dynamic digital records, such as live systems, are a challenge, as they are variable throughout their use, whereas records are conventionally thought to be fixed from creation.

Similarly in IT contexts, these records are often referred to as ‘data’, further complicating the ability to identify them, and thus, manage them. The convergence of the terms ‘record’, ‘information’ and ‘data’ demonstrates the difficulties in an international standardisation for the definition of a record, particularly in the digital format. My case study demonstrated this lack of clarity, as the majority of respondents struggled to identify both records and born-digital records; a high percentage of participants interpreted records as draft reports, digitised records and raw data.

Archiving

This is a particularly problematic term I have come across in practice. Although the term is not contested by professionals or the literature, in IT ‘archiving’ relates to the storage of data from systems that is no longer required for active use but still requires maintenance. In my experience and in the case study, archiving is viewed as the records management activity of semi-current storage, rather than as preservation for historical purposes. This poses significant challenges for implementing retention schedules and policies.

Record life cycle

Whilst there are differing models, each presents the stages of a record through its creation, use and disposition. The case study identified that confidence in defining the term was significantly less than the actual comprehension, suggesting that there is a basic understanding of the stages of a record when presented with examples, irrespective of confidence and ability to articulate it. This unconscious grasp of records management principles, while not ideal, provides a foundation we can build upon.

Retention schedule

Knowledge of the retention schedule in my time working in records management has varied dramatically. Half of the study participants demonstrated a basic understanding of ‘retention schedule’. Notably, many overlooked the possibility of archival preservation, assuming all records eventually face destruction. I consider this prominent in practice, as many staff seem to have no knowledge of an archive at all, let alone of its enduring long-term value. This exposes the implications of a poor understanding of records management terminology, such as the improper application of instructions to archive, and the inability to identify records in order to apply appropriate retention.

Addressing the issue of terminology

To overcome these terminological barriers, we must first acknowledge a fundamental weakness in how we communicate these concepts to our organisations. Although the profession maintains general standardisation of key records management terms, it is evident that these terms are jargon to many and act as a barrier to good records management practices. Alienating users through poor communication of terms results in a lack of interest and engagement with records management. Varying interpretations of terminology lead to inconsistent practice, impacting efforts to promote good records management. Cross-departmental challenges also arise, such as communication issues and the exclusion of records management from IT initiatives such as system design.

I believe the solution lies in adapting our professional terminology to align with institutional contexts while maintaining clarity about core concepts. I specify ‘historical archive’ rather than just ‘archive’, which has improved understanding of preservation requirements and the distinction from semi-current storage. By focusing on business value when providing a definition, it becomes easier to aid the identification of records, especially in digital environments. For cross-departmental issues, such as collaborating with IT, I find myself translating between ‘data management’ and ‘records management’ to ensure effective communication.

Creating a comprehensive, institution-specific glossary offers a practical path forward. This tool would bridge the gap between records management professionals and staff members, while acknowledging and incorporating existing organisational terminology. By adapting our communication to organisational needs while maintaining professional standards, we can foster a better understanding and compliance with policies, ultimately strengthening information culture and records management practices across the organisation.
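One way to start such a glossary is as simple structured data that can sit behind an intranet page or chatbot. The sketch below is a minimal, hypothetical Python example; the terms, in-house wordings and definitions are illustrative only and not drawn from any particular organisation.

```python
# A minimal sketch of an institution-specific glossary. Each professional
# term is paired with hypothetical in-house wording plus a plain-language
# definition that staff can understand without records management jargon.
GLOSSARY = {
    "retention schedule": {
        "local_term": "keep-or-delete timetable",  # hypothetical in-house wording
        "definition": (
            "How long each type of record must be kept, and whether it is then "
            "destroyed or transferred to the historical archive."
        ),
    },
    "archive": {
        "local_term": "historical archive",
        "definition": (
            "Permanent preservation of records with enduring value, distinct "
            "from semi-current storage."
        ),
    },
}

def explain(term: str) -> str:
    """Return a staff-friendly explanation for a records management term."""
    entry = GLOSSARY.get(term.lower())
    if entry is None:
        return f"'{term}' is not yet in the glossary - please suggest a definition."
    return f"{term} ({entry['local_term']}): {entry['definition']}"

print(explain("archive"))
```

Keeping the glossary as data rather than prose means the same entries can feed induction materials, policy appendices and search tooling without drifting out of sync.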

The Author

<carys.hardy@irms.org.uk>

UK experiencing four ‘nationally significant’ cyberattacks every week

In its latest Annual Review, the UK’s cyber agency, part of GCHQ, has revealed that the cyber threats facing the UK continue to escalate. The National Cyber Security Centre (NCSC) dealt with 204 ‘nationally significant’ cyberattacks against the UK in the 12 months to August 2025 – a sharp rise from 89 in the previous year.

Of a total of 429 incidents handled, 18 were categorised as ‘highly significant’, meaning that they had the potential to have a serious impact on essential services. This marks an almost 50% increase in incidents in this second-highest category compared with the previous year, and an increase for the third year running.

A substantial proportion of all incidents handled by the NCSC last year were linked to advanced persistent threat actors – either nation-state actors or highly capable criminal groups.

Dr Richard Horne, Chief Executive of the NCSC, said:

“Cybersecurity is now a matter of business survival and national resilience.

“With nearly half the incidents handled by the NCSC deemed to be nationally significant, and a 50% rise in highly significant attacks on last year, our collective exposure to serious impacts is growing at an alarming pace.

“The best way to defend against these attacks is for organisations to make themselves as hard a target as possible.

“That demands urgency from every business leader: hesitation is a vulnerability, and the future of their business depends on the action they take today. The time to act is now.”

In response to the rising threat and in the wake of high-profile cyber incidents, the government has written to chief executives and chairs of leading businesses – including all FTSE 350 companies – highlighting the importance of government and business working hand in hand to protect the UK economy and make cyber resilience a board-level responsibility.

Capita fined £14 million for data breach affecting over 6 million people

The Information Commissioner’s Office (ICO) has issued a fine of £14 million to Capita for failing to ensure the security of personal data related to a breach in 2023 that saw hackers steal millions of people’s information.

Capita plc has been fined £8 million and Capita Pension Solutions Limited has been fined £6 million, giving a combined total of £14 million.

The cyberattack took place in March 2023. The personal information of 6.6 million people was stolen, from pension records and staff records to the details of customers of organisations Capita supports. For some people, this included sensitive information, such as details of criminal records, financial data or special category data.

Capita Pension Solutions Limited processes personal information on behalf of over 600 organisations providing pension schemes, with 325 of these organisations also impacted by the data breach.

The ICO investigation found that Capita had failed to ensure the security of processing of personal data, which left it at significant risk, as well as lacking the appropriate technical and organisational measures to effectively respond to the attack.

John Edwards, UK Information Commissioner, said:

“Capita failed in its duty to protect the data entrusted to it by millions of people. The scale of this breach and its impact could have been prevented had sufficient security measures been in place.

“When a company of Capita’s size falls short, the consequences can be significant. Not only for those whose data is compromised – many of whom have told us of the anxiety and stress they have suffered – but for wider trust amongst the public and for our future prosperity. As our fine shows, no organisation is too big to ignore its responsibilities.

“Maintaining good cybersecurity is fundamental to economic growth and security. With so many cyberattacks in the headlines, our message is clear: every organisation, no matter how large, must take proactive steps to keep people’s data secure. Cybercriminals don’t wait, so businesses can’t afford to wait either – taking action today could prevent the worst from happening tomorrow.”

Small businesses to receive cybersecurity boost with new toolkit from experts

The National Cyber Security Centre (NCSC), part of GCHQ, has launched the Cyber Action Toolkit: a single destination for sole traders, micro-businesses and small organisations to start building their cyber defences.

Recent figures show that 42% of small businesses reported cyber breaches in 2024, while 35% of micro-businesses faced phishing attacks. Despite this threat, many small businesses struggle to know where to start with protecting themselves against the most common threats.

In its latest Annual Review, the NCSC warns that every organisation with digital assets is a potential target for cybercriminals. NCSC CEO Dr Richard Horne is urging all businesses to ‘act now’ to build the UK’s collective resilience and close the widening gap between the rising pace of the cyber threat and our current capabilities.

Jonathon Ellison OBE, Director of National Resilience at the NCSC, said:

“In our digital-dependent society, it is vital that businesses take responsibility for their cybersecurity to defend against and recover from common cyberattacks. However, we know it can be hard for sole traders and small businesses to know where to begin.

“That is why we have designed the Cyber Action Toolkit: a personalised, user-friendly entry point that equips you to start building a strong cybersecurity foundation.

“Every step taken makes the UK more resilient, and by putting this toolkit into practice, you can work towards making yourself and your business more confident and capable.”

Archive Sector Survey 2025

This survey was conducted by The National Archives (TNA) in partnership with National Records of Scotland, the Scottish Council on Archives, the Welsh Government, the Archives and Records Council Wales, and the Public Record Office of Northern Ireland.

TNA received 330 responses from across England, Wales, Scotland and Northern Ireland, covering a vast range of services, such as local authorities, universities, charities, museums and galleries, schools, businesses, and arts organisations.

The survey partners commissioned the consultants Kazky to analyse the results of the survey and produce a UK-wide report. There are also separate reports for local authorities, Wales, Scotland, Northern Ireland and each region of England. The UK report and Wales report are both available in Welsh.

To view the full findings, visit <https://www.nationalarchives.gov.uk/archives-sector/our-archives-sector-role/annual-uk-archive-sector-survey/findings-from-previous-surveys/archive-sector-survey-2025/>

Scottish Archives and Records Year in Review 2024–25

The Scottish Council on Archives have published the latest edition of the Archives Review, celebrating the work and projects being undertaken in collections across the country.

Submitted articles include the work of Scotland’s Business Archives Surveying Officer, the Following the Fish project from the Highland Archive Centre, celebratory initiatives from the Fruitmarket and Edinburgh City Archives,

an investigation into historic thefts from the National Records of Scotland, moves from Aberdeen City & Aberdeenshire Archive and the National Trust for Scotland, and the Scottish Women Waging Peace project from the National Library of Scotland. This is only a brief snapshot of the articles this year’s Review contains.

Please click here to read the review.

Using AI to manage the digital heap

Earlier this year, the Government Digital Service (GDS) launched the AI Playbook for the UK government and alongside this published the AI Insights series, a collection of articles designed to help government departments and public sector organisations implement AI solutions safely, securely, and effectively.

It is crucial to address both the opportunities and the practical challenges of AI implementation in government (AI Opportunities Action Plan 2025). This new series builds on the AI Playbook for the UK Government by diving deeper into the more technical aspects of a wide range of AI technologies, with a focus on practical implementation in government settings.

One of the latest in this series is “Using AI to manage the digital heap”, which is highly relevant to the work undertaken by records and information managers. The guide aims to help you to integrate AI and data science techniques into your organisation’s content lifecycle management approach.

It sets out:

• an approach to targeting and planning AI and data science interventions in lifecycle management

• key use cases for AI and data science in lifecycle management

• advice on ensuring human control of AI-enabled lifecycle management
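By way of illustration only (this is not code from the guide), the pattern behind many such interventions can be sketched as an automated scorer whose low-confidence proposals are routed to a human reviewer, keeping a person in control of disposition decisions. The keyword heuristics below are hypothetical stand-ins for a trained model.

```python
# Illustrative sketch of human-in-the-loop lifecycle management: an automated
# scorer proposes retain/dispose actions, and anything below a confidence
# threshold goes to a human review queue rather than being applied.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

# Hypothetical keyword heuristics standing in for a trained classifier.
RETAIN_KEYWORDS = {"contract", "policy", "minutes"}
DISPOSE_KEYWORDS = {"draft", "duplicate", "out-of-office"}

def propose_disposition(doc: Document) -> tuple[str, float]:
    """Return a (proposed action, confidence) pair for a document."""
    words = set(doc.text.lower().split())
    retain_hits = len(words & RETAIN_KEYWORDS)
    dispose_hits = len(words & DISPOSE_KEYWORDS)
    total = retain_hits + dispose_hits
    if total == 0:
        return "review", 0.0  # no signal: always needs a human
    if retain_hits >= dispose_hits:
        return "retain", retain_hits / total
    return "dispose", dispose_hits / total

def triage(docs, threshold=0.8):
    """Apply proposals automatically only above the confidence threshold;
    everything else is queued for human review."""
    auto, human = [], []
    for doc in docs:
        action, confidence = propose_disposition(doc)
        if action != "review" and confidence >= threshold:
            auto.append((doc.title, action))
        else:
            human.append(doc.title)
    return auto, human
```

The design point is the threshold: lowering it automates more but shifts risk away from human oversight, which is exactly the trade-off the guide asks organisations to manage deliberately.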

To read the full guide visit <https://www.gov.uk/government/publications/ai-insights/ai-insights-using-ai-to-manage-the-digital-heap-html>

IRMS and AIIM announce formal collaboration to advance information management excellence

The Information and Records Management Society Ltd (IRMS) and the Association for Intelligent Information Management (AIIM) are pleased to announce a formal collaboration aimed at delivering enhanced value to information management professionals across both organisations’ global memberships. This strategic partnership combines AIIM’s technology-focused expertise with IRMS’s governance and records management best practices to create comprehensive professional development opportunities.

The collaboration officially launched in October and brings together complementary strengths to serve the broader information management community:

• AIIM contributes technology-focused training, certification programs, and expertise in intelligent information management systems

• IRMS provides established best practices in information governance, records management, and professional development frameworks

“We are excited to embark on this collaboration with IRMS to provide technology-focused content, certification, and best practices in intelligent information management,” said Tori Miller Liu, CIP, AIIM President & CEO.

“We are looking forward to sharing the benefits of this powerful collaboration with our membership,” added Jaana Pinnick, MSc (Dist.), FIRMS, AMIRMS, CDMP, IRMS Chair.

The partnership addresses the evolving needs of information professionals who must navigate both traditional governance requirements and emerging technological capabilities in artificial intelligence, automation, and digital transformation.

IRMS and AIIM look forward to exploring future collaboration opportunities that will deliver ongoing value to their respective members.


Update on our work to raise data protection standards in the public sector

This summer, I wrote to the Cabinet Office, insisting that the government must go further and faster to ensure Whitehall and the wider public sector put their practices in order.

We’ve been working with government to improve the protection of people’s data for the past three years, and further progress was a key action my office undertook following the Ministry of Defence data breach that disclosed details of Afghan citizens.

I’m pleased to update that the government has now set out the measures it will take to raise information security and data protection standards. The commitments include creating a central, coordinated approach for managing cross-government data protection accountability and compliance; establishing a dedicated team who will set consistent standards and respond swiftly to risks; and rolling out new information management training for all civil servants.

Alongside this, we are working on a joint commitment, in the form of a memorandum of understanding, that will explain how we will collaborate with government so that its ambitions to use new technologies to transform public services, create a modern digital government, and drive economic growth are pursued with the appropriate safeguards in place. We will agree with government how my office can receive assurance on the delivery and impact of this work.

This is a single step forward, but it is a crucial one. Government must now carry through on these commitments, to ensure the public can be confident when sharing their personal information with government, knowing that it will be handled responsibly and safely.

I will continue to update you on how that work progresses.

Book launch: Pioneering Women Archivists

On Monday 1 December in the Institute of Advanced Studies (UCL main building) from 6pm, Elizabeth Shepherd will launch her new book about pioneering women archivists. Pioneering Women Archivists tells the story of four remarkable women who laid the foundations of English local archives in the early twentieth century: Ethel Stokes, Lilian Redstone, Catherine Jamison and Joan Wake. The book analyses their professional historical work, alongside their educational, social and family contexts, to reveal their place in the history of the archival profession. Pioneering Women Archivists in Early 20th Century England was published by Routledge in June 2025.

At the event, Elizabeth Shepherd will speak about her research and present the main themes of the book. Dr Lucy Brownson (UCL Department of Information Studies) will speak about her research into women, archival practices, and knowledge work at Chatsworth. The presentations will be followed by a panel discussion and questions involving Elizabeth and Lucy, chaired by Dr Mari Takayanagi (historian, archivist and author).

Please book using the link: <https://www.ucl.ac.uk/institute-of-advanced-studies/events/2025/dec/ias-book-launch-pioneering-women-archivists>

Participate in World Digital Preservation Day on Thursday 6 November 2025!

World Digital Preservation Day is a great opportunity to raise global awareness for digital preservation and to connect the digital preservation community. Every year, on the first Thursday of November, all things digital preservation are celebrated.

Through this year’s theme Why Preserve? digital preservation practitioners have been invited to reflect on their organisation’s motivations for preserving their unique digital collections, to share their stories, and to transform those insights into compelling advocacy messages.

Organised by the Digital Preservation Coalition (DPC) and supported by digital preservation networks around the globe, World Digital Preservation Day is open to participation from anyone interested in securing the digital legacy, across all sectors and geographic locations. Join the DPC in a whole day dedicated to discovering digital preservation stories and answers to the question Why Preserve?, and share your unique motivations through blog posts, social media posts, events and creative activities!

Whether you organise a webinar, highlight the collections you are working on using social media, bake (or crochet!) a digital asset or get together with your colleagues to tell them more about digital preservation: there are so many ways to participate in #WDPD2025!

Visit the DPC’s webpages to be inspired by some of the awesome actions from previous years, read last year’s amazing blogs, and make sure to use the WDPD 2025 logo in your own language to promote your event.

There is so much happening on World Digital Preservation Day, and the DPC can’t wait to hear what you are organising on 6 November! There’s a special page on the DPC website with all the events happening around the world. If you’d like your event to be listed on the DPC website too, please contact <angela.puggioni@dpconline.org>.

World Digital Preservation Day is just one of the ways the DPC helps to raise awareness of the strategic, cultural, and technological issues which make up the digital preservation challenge. The DPC also supports members through other advocacy activities, workforce development, and partnerships, helping members to deliver resilient long-term access to digital content and services and derive enduring value from their digital collections.

For all the latest updates, visit the World Digital Preservation Day page on the DPC website, follow the hashtag #WDPD2025 on social media or contact <angela.puggioni@dpconline.org> for more details.

Latest offerings from our Training Partners

DPS & BJM IG and Data Privacy Training

DPS & BJM IG and Data Privacy Training

DPS & BJM IG and Data Privacy Training

UK GDPR/GDPR, Data Protection and the Common Law of Confidentiality for All Staff

Essential General Data Protection Training

Advanced Data Protection Training for IG Leads

Training delivery can be virtual or in house. From a group of 4 and can be recorded

Training delivery can be virtual or in house. From a group of 4 and can be recorded

Training delivery can be virtual or in house. From a group of 4 and can be recorded

Tkm Consulting

Tkm Consulting

Developing Your Role as a Senior Information Risk Owner (SIRO)

BCS Practitioner Certificate in Scottish Public Sector Records Management

Conducting Data Protection Impact Assessments

Tkm Consulting

Tkm Consulting

Naomi Korn Associates

Naomi Korn Associates

Naomi Korn Associates

Naomi Korn Associates HCUK

BCS Practitioner Certificate in Data Protection

Tkm Diploma in Managing Data Protection Compliance

Privacy by Design: Data Protection Impact Assessments (DPIAs)

Information Security and Data Breach Management

Data Protection Essentials

Data Protection Rights (Focused on Data Subject Access Requests)

For full details of all the courses, including course description and cost, please visit: <https://irms.org.uk/page/ThirdPartyTrainingProviderCourses> And don’t forget to take advantage of your fantastic IRMS member discount.

Subject Area

Data Protection Data Protection

Data Protection/ Information Architecture/ Information Law (Governance)/ Records Management

Data Protection/ Information Law (Governance)/ Records Management

Data Protection/ Information Law (Governance)

Data Protection/ Information Law (Governance)

Information Architecture Information Law (Governance)

Information Law (Governance)

Information Law (Governance)

Naomi Korn Associates

Naomi Korn Associates Freevacy Freevacy Freevacy Freevacy

An Overview of The Freedom of Information Act and Environmental Information Regulations

Information Security and Data Breach Management

Certified Information Privacy Technologist

Certified Information Privacy Manager

Certified Information Privacy Professional Europe (CIPP/E)

Certified Information Privacy Professional United States (CIPP/US)

DPS & BJM IG and Data Privacy Training HCUK HCUK HCUK

Essential Cyber Security Training

Working Together: Combined Course for Senior Information Risk Owners, Caldicott Guardians and Data Protection Officers: Half day

Responding to Subject Access Requests for Health & Social Care

Data Protection Officer in Health and Social Care: What Good Looks Like

online

Live online, instructor-led training & in-company classroom training option for groups of 6 or more

Live online, instructor-led training & in-company classroom training option for groups of 6 or more

Live online, instructor-led training & in-company classroom training option for groups of 6 or more

Live online, instructor-led training & in-company classroom training option for groups of 6 or more

Training delivery can be virtual or in house. From a group of 4 and can be recorded

3 and a half hours/ 1 half day

3 and a half hours/ 1 half day

4 x 4 hour online sessions, or 2 days onsite + unlimited 1-2-1 coaching and exam preparation

4 x 4 hour online sessions, or 2 days onsite + unlimited 1-2-1 coaching and exam preparation

4 x 4 hour online sessions, or 2 days onsite + unlimited 1-2-1 coaching and exam preparation

4 x 4 hour online sessions, or 2 days onsite + unlimited 1-2-1 coaching and exam preparation

Subject Area

Information Law (Governance)

Records Management

Records Management

diary

IRMS Events

IRMS Legal roundtable discussion

13 November 2025

Time: 11:00–12:00

IRMS London Group – Monthly drinks and networking event

20 November 2025

Location: London
Time: 18:00–21:00

IRMS Scotland Group – Records management symposium

3 December 2025

Location: Edinburgh


IRMS London Group – Monthly drinks and networking event

18 December 2025

Location: London
Time: 18:00–21:00

IRMS Conference 2026

17–19 May 2026

Location: Celtic Manor Resort, Newport

new members November 2025

There have been 378 new members since January 2025

individual members

Samantha Carter

Marcus Stewart

student/apprentice members

Johnny Costello

Ross Duncan Boyle

Meredith J. Batt

Mary More

Christal Glover

Kiira Jeffers

Alekya Dakarapu

Paula Melo

Derick Ofetotse
