YetToDiscover Autumn 2024


YET TO DISCOVER

THE NEW MODERN FRONTIER IN LEARNING DATA

Pioneering advances through Learning Engineering.

That Data Isn’t Real! -- Why is synthetic data taking over the world? And why is that likely a good thing?

Democratizing Data Design -- Sometimes you just want your data. So why is it so hard to get?

What to do when your AI won’t talk to your business systems: a quick use case -- We look at a quick case study connecting an AI Tutor to a competency assertion system.

Products and services designed to support the data demands and goals of Learning Engineering.

The Learning Data Management Report and Catalog is published by Yet Analytics, Inc. 8600 Foundry Street, 217E Spinning Building, Savage, MD 20763 USA. © 2024. Yet Analytics. Reproduction without permission is prohibited. SHELLY BLAKE-PLOCK, EDITORIAL DIRECTOR.

Because when it all comes down to brass tacks, you want your data and you want it your way.

Peruse products and services purpose-built to support the data instrumentation and integration needs of Learning Engineers.

That Data Isn’t Real!

Synthetic xAPI comes of age in use cases that expand the ability to innovate.

Pioneering advances through Learning Engineering

How engineering methodologies are transforming learning activities into data-rich learning experiences.

EDITORIAL

I recently had the opportunity to attend this summer’s annual ICICLE conference (in Phoenix, hosted by the ASU Learning Engineering Institute – hot!). The aptly named ICICLE – the International Consortium for Innovation and Collaboration in Learning Engineering – is a volunteer organization that started in 2017 at the IEEE.

While panels, talks, and workshops covered everything from modeling complexity to integrating GenAI into content creation, there was something else going on: a sense that the maturation of Learning Engineering is both inevitable and… in a way… sort of terrifying.

On the one hand, developments in AI are quickly altering expectations and threatening traditional roles in learning design. On the other hand, whereas in the past learning orgs often had to justify their existence (especially within corporate environments), now learning and training operations – bolstered by the same explosion of AI and analytics technologies – are being recognized as voluminous deposits of that new oil: data. This culture shift is forcing a recalibration of what it means to work in the learning space.

From xAPI to synthetic data to XR to advanced instrumentation and architectures, things are... different.

We’d love to hear about the shifts that you are seeing in your workplaces. Is the time right for Learning Engineering?

– Shelly Blake-Plock | September 2024

THAT DATA ISN’T REAL!

The truth is, none of these new capabilities -- from AI to automated analytics -- will work without synthetic data.

This article discusses the DATASIM project, which was recently sponsored by the Air Force Research Laboratory in an effort to create synthetic xAPI data about pilot training. Initial development of DATASIM was sponsored by the Advanced Distributed Learning Initiative. The AFRL project was presented in a poster session at the 2024 Emerging Technologies for Defense Conference and Exhibition (Emerging Technologies Institute / NDIA).

Millions of lines of code are streaming across my screen. The little command line interface looks like it learned a thing or two watching The Matrix. But everything is working exactly as planned. I’m testing a new simulation specification. And what am I simulating? Hundreds of thousands of learning experiences -- the outcomes of which are entirely rendered as synthetic data.

Why synthetic data? Because it is key to innovation. (We know… that seems strange). But here’s the deal: when building complex AI and analytics applications, you need data. But more often than not, either you don’t have access to the data you need or the data you need is wrapped up in red tape (think privacy and data security policies).

Enter synthetic data.

By generating synthetic data, we can build a stockpile of the data we need.

DATASIM (aka the Data and Training Analytics Simulated Input Modeler) is an open source project dedicated to generating synthetic xAPI data. The following two use cases demonstrate a bit about why synthetic xAPI is valuable.

Using Synthetic xAPI to Predict Cost

As organizations increasingly adopt cloud-based xAPI implementations to track and analyze learning experiences, understanding the associated costs becomes crucial. One effective approach to predict these costs is through data simulation.

Cloud-based xAPI implementations involve various cost factors, including data storage, processing power, and data transfer. Throw AI and novel analytics into the mix and things get weirder. Traditional methods of cost estimation often rely on historical data or rough calculations, which can lead to inaccurate budgeting. The challenge is exacerbated in the case of xAPI because it is so difficult to estimate “just how much” xAPI data your implementation will be producing. This is where data simulation comes into play.

Is the line being blurred between what is real and what is synthetic? And to what degree can synthetic data be used to empower real innovations? These are the questions being asked by researchers in AI and novel analytics.

DATASIM is an Apache 2.0 open source data generator designed to model and produce synthetic xAPI data. This powerful tool allows organizations to simulate different scenarios and workloads, providing valuable insights into the potential costs of their xAPI implementation.

The benefits are clear both to technical experts and to business stakeholders. By simulating xAPI data modeled on the expected output of a specific use case, organizations can predict costs more accurately, avoiding unexpected expenses and ensuring efficient budget allocation.

Additionally, data simulation helps in testing the scalability of the cloud infrastructure, identifying potential bottlenecks, and optimizing resource allocation. Predicting costs through simulation allows organizations to assess financial risks and develop strategies to mitigate them.

With detailed cost estimates, decision-makers can choose the most cost-effective cloud solutions and configurations for their xAPI implementation.

Using data simulation to predict the cost of running a cloud-based xAPI implementation is a strategic approach that offers numerous benefits. Tools like DATASIM provide organizations with the ability to accurately estimate costs, test scalability, and make informed decisions. As a result, organizations can optimize their xAPI implementations, ensuring they are both cost-effective and capable of meeting their learning analytics needs.

Why does this matter? Because xAPI tracks activity, different learning workflows will produce vastly different amounts of data for automated analytics. For example, tracking the taking of a quiz in an LMS is completely different from tracking the complex activities that occur within an immersive XR training environment. And the costs of the xAPI implementations of each are therefore completely different.

By adjusting parameters in the simulations that DATASIM runs, organizations can simulate different data volumes, types of learning activities, and user interactions. This granular control allows for far more precise cost prediction and resource planning.
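To make the arithmetic concrete, here is a minimal sketch of the kind of cost model a simulated statement volume can feed. The statement sizes, rates, and unit prices below are hypothetical placeholders, not benchmarks from any particular cloud provider.

```python
# Hypothetical cost model: all rates, sizes, and unit prices are placeholders.
AVG_STATEMENT_BYTES = 2_500        # assumed size of an xAPI statement with extensions
STORAGE_COST_PER_GB_MONTH = 0.023  # placeholder storage price
TRANSFER_COST_PER_GB = 0.09        # placeholder egress price

def estimate_monthly_cost(statements_per_learner_per_day: int,
                          learners: int,
                          days: int = 30) -> dict:
    """Estimate storage and transfer cost for one month of simulated xAPI traffic."""
    total_statements = statements_per_learner_per_day * learners * days
    total_gb = total_statements * AVG_STATEMENT_BYTES / 1e9
    return {
        "statements": total_statements,
        "gigabytes": round(total_gb, 2),
        "storage_usd": round(total_gb * STORAGE_COST_PER_GB_MONTH, 2),
        "transfer_usd": round(total_gb * TRANSFER_COST_PER_GB, 2),
    }

# An LMS quiz workflow vs. a heavily instrumented XR scenario (volumes assumed).
print(estimate_monthly_cost(statements_per_learner_per_day=20, learners=5_000))
print(estimate_monthly_cost(statements_per_learner_per_day=2_000, learners=500))
```

Swapping the guessed statement counts for the actual output volume of a DATASIM run is what turns a rough calculation like this into a grounded prediction.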

Better planning, better results.

This isn’t your AI’s synthetic data. This is something different... something designed to ensure that humans can always have the transparent access to data and machine decisions that they need in order to understand what is going on. And DATASIM’s built-in flexibility means that it can work in almost any learning scenario.

It is the ability to generate data that is modeled after any variety of human activity or workflow that makes DATASIM so powerful. It’s also what led us to the Air Force Research Laboratory, where we applied the principles of DATASIM to the domain of pilot training. What follows is the account of that project as delivered at the 2024 Emerging Technologies for Defense conference.

While the domain of pilot training could benefit from the development of novel analytics and artificial intelligence innovations, much of the data required to feed such applications is constrained by security and privacy requirements, especially in the DoD.

But what if the data were synthetically generated? GenAI is an option. Though AI-driven synthetic data is often... problematic. (And nearly impossible to troubleshoot.)

DATASIM represents a different approach -- an approach where all of the patterns and sequences of synthetic activity are modeled in an xAPI Profile -- a semantic data profile that can be designed by a human, run by a machine, and which is defined by open global standards.

The profile is fed into DATASIM where the researcher chooses parameters such as the number of learners being observed, the duration of the training session, and whether the activity should be weighted for increased stress and difficulty. The profile and the variable modifications are saved as a simulation specification and are reusable. So, unlike black box GenAI solutions, the synthetic behavior here is transparent, explainable, and able to be audited and modified either by humans or machines.
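As a rough illustration (and not DATASIM’s actual input schema, which is documented in the project’s GitHub repository), a reusable simulation specification of this kind might be sketched as follows:

```python
# Illustrative simulation specification -- field names are hypothetical,
# not DATASIM's documented input format.
simulation_spec = {
    "profile": "https://example.org/profiles/pilot-evaluation",  # xAPI Profile IRI (example)
    "personae": {
        "name": "trainee-cohort",
        "actors": [{"name": f"Trainee {i}", "mbox": f"mailto:trainee{i}@example.org"}
                   for i in range(1, 51)],          # number of learners being observed
    },
    "parameters": {
        "start": "2024-06-01T08:00:00Z",            # duration of the training session
        "end": "2024-06-01T12:00:00Z",
        "seed": 42,                                  # fixed seed -> reproducible, auditable output
    },
    "alignments": {
        # weight the simulated cohort toward high-stress, high-difficulty activities
        "https://example.org/activities/emergency-procedure": 0.8,
    },
}
```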

While attempts have been made to model airplane behavior and pilot activities in the cockpit, relatively little attention has been paid to modeling the pilot instructors themselves. In working with the Air Force Research Laboratory, we chose to create data profiles representing the act of evaluating pilot trainees. To do this, we created a digital representation of a pilot evaluation form and then modeled the behaviors and activities of pilot instructors during an evaluation. This included not only how they score pilots on different items, but things like how long it takes them to complete portions of the assessment, in what order they observe things happening, and when they choose to change initial observations based on later observations.

The key is to simulate the instrumentation of the “digital interface points” and “human observation points” that will allow for a high-fidelity simulation to occur. As all of the mechanics are available in a machine-readable format, there is no need for the simulation to occur in real-time -- but ideally, the synthetic data output from DATASIM is the same as if the simulated activity occurred in the real world.

We evaluated the technical conformance of the synthetic data to the xAPI data standard and found it to be 100% conformant at four levels of scale, from one hundred data statements to one million data statements. The synthetic data generation occurs over an amount of time that we would consider viable for ad hoc use — less than five minutes per million statements.

We then evaluated the synthetic JSON code against a manually created baseline. DATASIM took advantage of optional fields and decisions available as xAPI data attributes (such as choosing different, but conformant, ways to represent the identity of the pilot trainee or the instructor, or by automating the creation of new session and registration IDs). Overall, DATASIM produced deterministic results entirely consistent with the concepts and patterns -- and temporal conditions -- coded into the xAPI Profile representing patterns of instructor behavior.

Sometimes the machine trains the person, and sometimes the person trains the machine. The method of synthetic data generation represented in DATASIM is based on an idea of human-computer synthesis -- each enabling capabilities in the other. To learn more, visit https://github.com/yetanalytics/datasim

Finally, we evaluated the semantic intelligibility of the results. While we designed a method for human evaluation, we found it to be too unwieldy to use at scale. So we’ve been experimenting with feeding the resultant synthetic xAPI into a GPT model and having it produce a human-readable narrative based on the data. This is where it actually makes sense to leverage GenAI. Currently, the GPT is able to turn our synthetic JSON code into a human-readable narrative of basic consistency. And it is beginning to be able to identify gaps and inconsistencies in narrative flow -- such as if actions occur out of order or something breaks the logical flow of events.
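A minimal sketch of that narrative step, assuming the OpenAI Python client; the model name and prompt wording are placeholders of ours, not the configuration used in the project.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python client is installed and configured

client = OpenAI()

def narrate(statements: list[dict]) -> str:
    """Ask a chat model to retell a batch of synthetic xAPI statements as prose
    and to flag any events that appear to occur out of order."""
    prompt = (
        "Summarize the following xAPI statements as a short narrative of the "
        "training session, and note any actions that seem to occur out of order:\n"
        + json.dumps(statements, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```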

The next step is to explore other training use cases. In the end, it might be that every training event across the enterprise is represented in a data profile that can be used to produce synthetic data.

DATASIM represents a different approach -- an approach where all of the patterns and sequences of synthetic activity are modeled in an xAPI Profile.

DEMOCRATIZING DATA DESIGN

We know. You just want your data. Why is it so hard to get your own data? What, in the name of all that is good, will it take to just get consistent access to good data?

The Experience API (xAPI) has transformed the landscape of learning technology, enabling the capture of diverse learning activities across various platforms. xAPI Profiles can ensure that your xAPI data is both accessible and of high quality. However, designing xAPI Profiles has traditionally been a cumbersome and labor-intensive process.

Centriph is designed with the principles of Learning Engineering in mind.

This complexity often deters organizations from fully leveraging the potential of xAPI, hindering the optimization of learning and development programs.

xAPI Profiles provide a structured framework for defining the rules, vocabulary, and patterns used in xAPI statements. These profiles ensure consistency and interoperability across different systems, enabling detailed tracking and analysis of learning activities. However, creating these profiles requires a deep understanding of xAPI specifications, meticulous attention to detail, and significant time investment.
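For readers who have not seen one, the skeleton below sketches the top-level shape of an xAPI Profile -- concepts, statement templates, and patterns -- using illustrative example IRIs. It is deliberately trimmed and is not a complete, conformant profile; the full requirements live in the xAPI Profiles specification.

```python
# A trimmed, illustrative xAPI Profile skeleton (not a complete, conformant profile).
profile = {
    "@context": "https://w3id.org/xapi/profiles/context",
    "id": "https://example.org/profiles/xr-maintenance",
    "type": "Profile",
    "conformsTo": "https://w3id.org/xapi/profiles#1.0",
    "prefLabel": {"en": "XR Maintenance Training"},
    "definition": {"en": "Tracks activity in an example XR maintenance scenario."},
    "versions": [{"id": "https://example.org/profiles/xr-maintenance/v1",
                  "generatedAtTime": "2024-06-01T00:00:00Z"}],
    "author": {"type": "Organization", "name": "Example Org"},
    "concepts": [  # the vocabulary: verbs, activity types, extensions
        {"id": "https://example.org/verbs/inspected", "type": "Verb",
         "inScheme": "https://example.org/profiles/xr-maintenance/v1",
         "prefLabel": {"en": "inspected"},
         "definition": {"en": "The learner inspected a component."}},
    ],
    "templates": [  # the rules a statement must follow to count as this activity
        {"id": "https://example.org/templates/inspected-component",
         "type": "StatementTemplate",
         "inScheme": "https://example.org/profiles/xr-maintenance/v1",
         "prefLabel": {"en": "Inspected Component"},
         "verb": "https://example.org/verbs/inspected"},
    ],
    "patterns": [  # the sequences of templates that make up a full experience
        {"id": "https://example.org/patterns/inspection-walkthrough",
         "type": "Pattern", "primary": True,
         "inScheme": "https://example.org/profiles/xr-maintenance/v1",
         "prefLabel": {"en": "Inspection Walkthrough"},
         "sequence": ["https://example.org/templates/inspected-component"]},
    ],
}
```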

Whereas xAPI describes discrete activities, xAPI Profiles describe the combinations of activities that make up a learning experience. You can think of an xAPI Profile as a way to create a description of a learning experience which a machine will be able to understand.

Centriph, a software platform developed by Yet Analytics, addresses these challenges by offering a streamlined approach to xAPI Profile design. This innovative solution simplifies the authoring process, making it accessible to both technical implementers and business users.

Centriph’s intuitive design allows users to engage with xAPI Profiles without requiring extensive technical knowledge. This democratizes the profile creation process, enabling more stakeholders to contribute to the development of learning analytics. And for highly technical users, Centriph offers all of the “under the hood” features and accessibility to data and code that a developer could want.

One of Centriph’s standout innovations is its built-in real-time data validation and error handling. As users design their profiles, Centriph automatically checks for errors and provides guided instructions to resolve issues. This feature not only reduces the likelihood of errors but also significantly decreases the time required to create robust xAPI Profiles.

By streamlining the profile design process, Centriph enables organizations to rapidly develop and deploy powerful data profiles. This efficiency supports timely decision-making and enhances the overall agility of learning and development programs.

Save time, reuse what’s proven to work, be assured of data compliance, and drastically reduce technical debt.

Centriph is designed with the principles of Learning Engineering in mind, supporting the data instrumentation goals of this emerging field. By facilitating the accurate and efficient capture of learning data, Centriph helps organizations to develop evidence-based strategies for improving learning outcomes.

For enterprise customers, Yet Analytics offers the deployment of Centriph to their cloud environment. This ensures that data privacy and security are maintained while fitting seamlessly into existing IT infrastructure. By integrating with current systems, Centriph provides a secure and scalable solution for designing data models and managing learning data.

The adoption of Centriph can have a transformative impact on how organizations manage and utilize their learning data. With streamlined profile design, organizations can ensure consistent data collection, enabling more precise and meaningful analysis. This leads to better insights into learner behaviors and outcomes, informing targeted interventions and personalized learning experiences. Moreover, the enhanced efficiency and accuracy offered by Centriph reduce the technical debt associated with traditional xAPI Profile design methods. Organizations can allocate resources more effectively, focusing on strategic initiatives rather than troubleshooting and error correction.

Centriph represents a significant advancement in the field of learning data management. By simplifying the design of xAPI Profiles, providing real-time validation, and supporting Learning Engineering goals, Centriph empowers organizations to fully harness the potential of their learning data.

Learn more at https://beta.centriph.com/

Centriph is available in beta. Be one of the first to understand the power of real-time xAPI data validation and guided profile development.

WHAT TO DO WHEN YOUR AI WON’T TALK TO YOUR BUSINESS SYSTEMS: A QUICK USE CASE

The STEEL-R team at the US Army Simulation and Training Technology Center needed a way to link the observations made by an AI tutor to the assertions made by a competency system. But AIs don’t always play nice with business systems.

Our solution was to instrument the AI tutor to deliver xAPI data and then send the data through a series of filters governed by xAPI Profiles. The end result of the filtering would be the evidence that the competency system required in order to make assertions.

Now, the STTC team has the ability to assert competency attainment in real time based on the observations made by the AI tutor. And, as an added advantage, because it’s all xAPI data, STTC can switch out different scenarios and the data remains relevant and interoperable.
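As a hedged sketch of the general pattern (not the STEEL-R implementation itself), the snippet below filters a stream of tutor-generated xAPI statements down to the verbs a governing profile treats as evidence and hands only those to a downstream competency system. All IRIs and helper names are hypothetical.

```python
from typing import Iterable

# Verbs named in the governing xAPI Profile pattern (hypothetical IRIs).
EVIDENCE_VERBS = {
    "https://example.org/verbs/demonstrated",
    "https://example.org/verbs/completed",
}

def filter_for_competency_evidence(statements: Iterable[dict]) -> list[dict]:
    """Keep only statements whose verb the competency system accepts as evidence."""
    return [s for s in statements if s.get("verb", {}).get("id") in EVIDENCE_VERBS]

def forward_to_competency_system(statements: list[dict]) -> None:
    # Placeholder for the hand-off to the assertion engine (message queue, REST call, etc.).
    for s in statements:
        print("evidence:", s["verb"]["id"], s["object"]["id"])

tutor_output = [
    {"actor": {"mbox": "mailto:learner@example.org"},
     "verb": {"id": "https://example.org/verbs/attempted"},
     "object": {"id": "https://example.org/activities/radio-check"}},
    {"actor": {"mbox": "mailto:learner@example.org"},
     "verb": {"id": "https://example.org/verbs/demonstrated"},
     "object": {"id": "https://example.org/activities/radio-check"}},
]

forward_to_competency_system(filter_for_competency_evidence(tutor_output))
```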

PIONEERING ADVANCES THROUGH LEARNING ENGINEERING.

In a hotel ballroom shielded from the oppressive heat of an Arizona summer, three hundred learning leaders from across the ranks of government, industry, and academia listened as pioneering leader Avron Barr described the origins of a small, but vital, organization of volunteers called ICICLE.

IEEE’s International Consortium for Innovation and Collaboration in Learning Engineering (ICICLE) defines “Learning Engineering” as “a process and practice that applies the learning sciences using human-centered design and engineering methodologies and data-informed decision making to support learners and their development.”

Established in late 2017, the volunteer consortium brings together the brightest minds in learning sciences, technology, design, and engineering for an annual conference. Past conferences have taken place at MIT, George Mason University, and Carnegie Mellon.

The 2024 edition was co-sponsored by Arizona State University.

Learning Engineering is an interdisciplinary field that applies engineering principles to the design, development, and implementation of learning experiences. It involves integrating theories from education, psychology, and cognitive science with cutting-edge technologies and approaches to data design and implementation to create effective, efficient, and engaging learning environments capable of producing measurable outcomes.

Using data standards, cutting-edge software and data architectures, and analytics to inform the creation and refinement of learning materials, experiences, and strategies, Learning Engineering values iterative development -- continuously testing and improving learning solutions through repeated cycles.

Learning Engineering is an interdisciplinary approach combining insights from various fields to develop comprehensive learning solutions. Teams pay attention to scalability and adaptability -- creating learning experiences, and the technologies and data infrastructure underpinning them, that can be securely scaled and adapted to diverse learner needs.

By leveraging data and technology, Learning Engineering improves the effectiveness of educational programs, ensuring better learner engagement and learning outcomes.

Data-driven insights from Learning Engineering can guide educational policies and instructional practices, leading to more informed decision-making. As the demand for continuous learning grows, Learning Engineering provides the tools and methodologies needed to support lifelong education.

Learning Engineering represents a transformative approach to scalable, data-rich, and informed education.

For the last year, we’ve been working with our partners in government on a project that is modernizing the delivery of data technologies in support of enterprise learning and training initiatives.

And the kinds of problems that we are working to solve are the same problems faced by learning and training organizations across industry and government:

• Maintaining a secure environment populated by a variety of technologies through the implementation of DevSecOps for learning ecosystems

• Choosing the right standards-based solutions that will fulfill the goals of enterprise interoperability

• Developing open source in a way that can meet our immediate needs while also fostering a community of developers both within and outside our organizations

• Making the link between instructional design, learning experiences, and meaningful data about the formation and outcomes of learning

They are the kinds of problems that many of you are facing as you rightsize your learning and training systems for the new realities of increased expectations and decreased budgets.

An Odyssey

Our team has journeyed through a ten-year odyssey in the sometimes grueling, yet always exhilarating, world of distributed learning. There have been successes.

We delivered the first commercial Learning Record Store to pass the ADL LRS Test Suite (back in 2015). And we demonstrated the first uses of xAPI to standardize and structure data streams of simulations and biometrics used in training. We developed the first validators for xAPI Profiles and subsequently built an entire platform for xAPI Profile authoring.

The team has been fearless throughout. And we’ve learned that Learning Engineering is not just a way to approach problems, it’s a mindset.

But we’ve had our own Learning Engineering challenges, including coming to terms with the fact that our concept for automated learning analytics was unscalable; having to redesign our entire core platform capabilities from the ground up to meet the changing needs of our customer base; and meeting some of the most complex security requirements on the planet in an effort to bring learning ecosystem modernization to the complex training organizations who have the most at stake.

Fortunately, the challenges have only increased our resilience and made us focus on producing better products.

Choosing to Build

You can spend all of your effort on studies and debates. And while these can be worthy if used judiciously, we all know that there are situations where it feels like we’re just debating the finer points as a way to kick the can down the road.

We’ve learned that when taking on the challenge of modernization – especially in a field as critical as learning and training (imagine getting on an airplane with an untrained pilot or letting a surgeon who never learned a procedure try it out on you) – you have to be prepared to make decisions. Not all of those decisions will be comfortable. But it is better for everyone that they are made.

Doing your own Learning Engineering project and building a tech stack to support learning and training? Be sure to stop by and check out our open source resources: https://github.com/yetanalytics

We’ve learned that Learning Engineering is not just a way to approach problems, it’s a mindset.

CATALOG OF PRODUCTS AND SERVICES

SQL LRS

SQL LRS is a fully-conformant xAPI Learning Record Store. It conforms to IEEE 9274.1.1 and passes the ADL LRS Test Suite.

SQL LRS launches with Postgres or SQLite and supports plug-and-play integration with BI via the SQL connectors found in all major BI platforms. So, rather than learn a new dashboarding and reporting system, you can use the existing reporting tools and workflows that you already have. Best of all, SQL LRS supports SQL queries -- meaning you don’t have to learn new query languages to use the product.
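As a small illustration of that workflow, the sketch below runs an ordinary SQL query against a SQLite-backed LRS from Python. The table and column names are assumptions made for the example; consult the SQL LRS documentation for the actual schema.

```python
import sqlite3

# Table and column names below are assumed for illustration;
# the real SQL LRS schema is documented with the project.
conn = sqlite3.connect("lrsql.sqlite.db")

rows = conn.execute(
    """
    SELECT json_extract(payload, '$.verb.id')  AS verb,
           COUNT(*)                            AS statements
    FROM statement
    GROUP BY verb
    ORDER BY statements DESC
    LIMIT 10
    """
).fetchall()

for verb, count in rows:
    print(f"{verb}: {count}")
```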

SQL LRS is distributed as Apache 2.0 open source on GitHub and a containerized version is available on Docker.

A hardened version is available for US federal government customers via a Continuous Authority to Operate (cATO) in ADL’s EDLM learning enclave on Platform One. SQL LRS has attained Certification to Field in IL2 and IL4 environments.

SQL LRS is the most installed Apache 2.0 Learning Record Store in the world.

SQL LRS has some unique features, including:

• offline capability

• native support for LRSPipe

• Reactions (i.e. conditional logic)

LRSPipe is a filter-forwarder that can be governed by xAPI Profiles. Use it either to move all of your data from one LRS to another, or configure it to read the patterns in a data profile -- so that it only forwards the information defined in the pattern.

Reactions is a conditional logic machine built into SQL LRS. Program it to assert new xAPI statements based on the statements that the LRS is receiving. For example, based on activity coming into the LRS, you could assert that a learner completed a multi-phase course or attained a proficiency.
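The snippet below is a conceptual illustration of that kind of rule, written in Python rather than in the Reactions configuration format that ships with SQL LRS. It watches for phase-completion statements in a hypothetical multi-phase course and, once every phase has been seen for a learner, emits a new course-level completion statement.

```python
from datetime import datetime, timezone

# Hypothetical phase activities for a multi-phase course.
PHASES = {
    "https://example.org/activities/course-1/phase-a",
    "https://example.org/activities/course-1/phase-b",
    "https://example.org/activities/course-1/phase-c",
}
COMPLETED = "http://adlnet.gov/expapi/verbs/completed"

seen: dict[str, set[str]] = {}  # actor mbox -> completed phase activity IDs

def react(statement: dict) -> dict | None:
    """Return a new course-completion statement once an actor finishes every phase."""
    if statement["verb"]["id"] != COMPLETED:
        return None
    actor = statement["actor"]["mbox"]
    phase = statement["object"]["id"]
    if phase not in PHASES:
        return None
    done = seen.setdefault(actor, set())
    done.add(phase)
    if done == PHASES:
        return {
            "actor": {"mbox": actor},
            "verb": {"id": COMPLETED},
            "object": {"id": "https://example.org/activities/course-1"},
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
    return None
```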

SQL LRS is widely installed and it is routinely updated. As of Aug 2024, there were 77 software update releases. And SQL LRS has been pulled on Docker over 10,000 times.

While you are free to grab SQL LRS off of GitHub and install it yourself, most of our enterprise customers take advantage of our expert installs and extended support packages.

Reach out to team@yetanalytics.com to discuss what’s right for you.

DATASIM

What it is: An open source data modeler and generator for xAPI.

What it does: Generates synthetic xAPI data.

Why it matters: Synthetic data is essential to innovation.

DATASIM is distributed as open source under the Apache 2.0 license. Yet Analytics is available to help you install and utilize DATASIM. We also provide data design services for complex projects. Get in touch at team@yetanalytics.com and see how we can augment your team.

About

DATASIM provides the ability to model and generate synthetic xAPI data. It is an essential tool for Learning Engineers.

Users can define parameters such as actor demographics, activity types, verbs, and time distributions to tailor the simulation to produce synthetic data aligned to specific use cases, such as training scenarios or user behavior patterns.

DATASIM utilizes xAPI Profiles to ensure that generated data adheres to specific data models, making the simulation data highly relevant and structured according to learning domain needs. And DATASIM features automated data validation -- ensuring that the xAPI Profiles leveraged (and the data that is generated according to them) are conformant with standards.

DATASIM can simulate data at various scales, from small datasets to billions of xAPI statements, enabling stress testing and benchmarking of LRSs and data pipelines. And when launched as a REST API webapp, it can be incorporated into automated CI/CD pipelines or other systems needing synthetic data on-demand.
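As one hedged example of the CI/CD pattern, the sketch below asks a running DATASIM service for a batch of synthetic statements over HTTP. The endpoint path, payload keys, and credentials are placeholders of ours, not DATASIM’s documented API.

```python
import requests  # third-party HTTP client

# Hypothetical endpoint and payload keys -- consult the DATASIM docs for the real API.
DATASIM_URL = "http://localhost:9090/api/v1/generate"

def fetch_synthetic_batch(spec: dict, max_statements: int = 10_000) -> list[dict]:
    """Ask a running DATASIM service for a batch of synthetic xAPI statements."""
    resp = requests.post(
        DATASIM_URL,
        json={"simulation-spec": spec, "max": max_statements},
        auth=("datasim-user", "datasim-pass"),  # placeholder credentials
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()

# e.g. in a CI job: seed a fresh test LRS before running pipeline benchmarks
# statements = fetch_synthetic_batch(simulation_spec)
```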

Most importantly, DATASIM supports time-based simulation -- enabling data sequences that reflect realistic learning timelines and user progression.

CENTRIPH

About

Centriph is an authoring platform for xAPI Profiles. It allows users to create, edit, and publish xAPI Profiles -- facilitating standardized data tracking for learning and training activities.

Featuring industry standard built-in profile validation technology, Centriph ensures that xAPI Profiles authored through its interface meet xAPI specification standards and xAPI Profile specifications -- reducing errors in implementation. Centriph will not allow a user to publish an xAPI Profile that is non-conformant to the spec.

Centriph also supports versioning of profiles -- enabling users to manage updates and maintain consistency. And in enterprise deployments, profiles can be shared privately or publicly and with specific team members.

Centriph outputs xAPI Profiles in JSON-LD format, which is compatible with various data ecosystems and supports linked data principles. Centriph provides the ability to query data profiles directly using SPARQL -- the standard query language and protocol for Linked Open Data on the web or for RDF triplestores.
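Because the output is JSON-LD, a published profile drops straight into standard RDF tooling. Here is a minimal sketch using the rdflib library, assuming a profile has been exported to a local file named profile.jsonld; the query simply lists a few triples, since more useful queries depend on the vocabulary a given profile uses.

```python
from rdflib import Graph  # rdflib 6+ parses JSON-LD out of the box

g = Graph()
g.parse("profile.jsonld", format="json-ld")  # an exported xAPI Profile (assumed filename)

# A generic query: list a few triples to see how the profile expands as RDF.
results = g.query(
    """
    SELECT ?subject ?predicate ?object
    WHERE { ?subject ?predicate ?object }
    LIMIT 10
    """
)

for subject, predicate, obj in results:
    print(subject, predicate, obj)
```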

The Centriph platform supports collaboration among teams, making it easier to co-author and refine profiles collectively. Profiles can be shared, remixed, and adapted to new use cases -- or can be used as templates.

Users can tailor profiles with specific verbs, activity types, and extensions to fit unique use cases. Centriph supports templates, patterns, and concepts -- and includes a synthetic data generator. And Centriph will alert users of errors in real-time while they are authoring profiles.

Centriph is available as SaaS and is also deployable to enterprises. Contact us for info at team@yetanalytics.com

What it is: An xAPI Profile authoring platform.

What it does: Ensures the design of valid standards-based data.

Why it matters: Invalid data drives cost up and breaks things.

How to get it: Centriph is currently in a public beta and is available at https://beta.centriph.com/. For enterprise deployments, contact us at team@yetanalytics.com

LEARNING ENGINEERING AND DATA SERVICES

Early in our company’s history, we made a decision to eschew most of the ordinary marketing channels and instead let our research speak for itself. All these years later, that research speaks volumes and includes foundational work in xAPI data design, streaming data architectures for learning and training, AI-enablement, and advanced data synthesis. Paramount to everything was our implementation of Learning Engineering strategies going back to our earliest days (in fact, we helped start the ICICLE consortium at IEEE back in late 2017).

In 2015, we were awarded the Nielsen Data Visionary Award at TechCrunch Disrupt in San Francisco. Approaching ten years since that milestone, we are continuing to demonstrate visionary leadership in the space with industry-defining contributions from all of our team members.

We look forward to bringing this energy and experience to your projects.

We provide comprehensive services to assist organizations in the implementation and management of xAPI-compliant systems, including everything from initial consulting to ongoing support and training. Focusing on the design and implementation of scalable data architectures, we tailor approaches to the needs of individual learning and development programs. Let us help you instrument and integrate quality data across your learning ecosystem.

LRSPIPE

There are many xAPI implementations that have been invested in over the last few years that, for one reason or another, are now in need of an update.

Common reasons include:

• An open source LRS offering is no longer being supported

• There’s a change in the funding available to support LRSs

• A shift in the enterprise data strategy to LRS federation means needing to find better and more flexible licensing terms

But what do you do about all of that data that you’ve collected? Migrating xAPI can be tricky. And, generally speaking, data migration is always expensive (and a hassle).

Fear not, we’ve got you covered.

We’ve built a software capability called LRSPipe which can be used to forward xAPI from one LRS to another — no matter who the LRS vendors are.

So now, you can migrate your data and change vendors far more easily.

Additionally, the filtering in LRSPipe can be governed by the patterns in xAPI Profiles. So, if you have downstream systems (like a competency assertion engine) that require certain data, you can use LRSPipe to forward only the data that is needed.

You can find LRSPipe on our GitHub page — it’s open source and distributed under the Apache 2.0 license.

And if you need help either using LRSPipe or in designing xAPI Profiles to use with it, feel free to reach out. We do this stuff every day and would be happy to chat.

Contact: team@yetanalytics.com

ALIGN YOUR LEARNING DATA WITH YOUR BUSINESS MODELS

One of the most powerful use cases for LRSPipe is in using your learning data to power your business systems.

Take for instance a readiness platform. With one or more instantiations of LRSPipe, you could draw data from numerous learning and training sources, filter it to meet the evidentiary needs of your readiness profile, and forward the data to what we call a “transactional LRS” where it’s ready to be consumed by your business systems.

This can result in increased automation as well as the creation of an auditable trail of data (because xAPI data is immutable).

From ensuring that learners have covered pre-requisites to asserting competencies, the value is endless.

“EXPERIENCE” A NEW PUBLISHER FOR UNITY

Interested in tracking learning and engagement activity from Unity as xAPI?

We recently wrapped up a project that offered teams building a dozen different Unity-based extended reality (XR) applications the ability to have engagement data from each application populate a mission-control style engagement dashboard in real time. So rather than one team / one dashboard, the goal here was all teams / one dashboard.

In order to ensure that each XR application’s activity data was sent in a common format that could easily be visualized, we wrote a simple but effective xAPI plug-in for Unity. Now, there are other such plug-ins out there, but this was our first attempt. And though it is very simple, we also designed it to be extendable. So, if you want to add to the tracking capabilities, you’ve now got a template that will work for most use cases.

Imagine a scene in which you’d like to track which learning activities a learner engages with inside of an XR scenario -- and you’d like to track in which order they engaged with different objects, whether they went back to certain objects before or after completing a task, or even how long the learner engaged with each object each time they engaged with it. Data instrumentation with xAPI makes this possible.

And, because xAPI is interoperable, you would have the opportunity to combine streams of activity from XR sessions with data tracked in an LMS or records of video access in an LXP. The result could be a more holistic understanding of the behavior of the learner -- including their preferences, their responses to stress and difficulty, and their ability to implement what they learned in “the real world” with what they’ve been tasked to do in virtual, augmented, or mixed reality.
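To ground that, here is a hedged sketch of the kind of statement such an XR interaction might produce and how any client could POST it to an LRS over xAPI’s standard HTTP interface. The LRS URL, credentials, and activity IRIs are placeholders.

```python
import requests
from datetime import datetime, timezone

LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"  # placeholder LRS URL
AUTH = ("client_key", "client_secret")                    # placeholder Basic Auth credentials

statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/interacted",
             "display": {"en-US": "interacted"}},
    "object": {"id": "https://example.org/xr/objects/fuel-valve",
               "definition": {"name": {"en-US": "Fuel Valve (XR scene)"}}},
    "result": {"duration": "PT45S"},  # how long the learner engaged with the object
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

resp = requests.post(
    LRS_ENDPOINT,
    json=[statement],
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},  # version header required by the xAPI spec
)
resp.raise_for_status()
```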

We’d love for you to try it out… with the understanding that it’s a work in progress. In fact, by trying it out, you can help us make it better for everyone — that’s why we’re distributing it as open source under an Apache 2.0 license.

See https://github.com/yetanalytics/unity-xapi-publisher for more information. And feel free to ask questions or leave issues in our repo.
