


Turning negative results into positive change

3 Reasons why compliance training fails

Collecting relevant assessment evidence, are you ticking the right box?

PAGE 6-7

PAGE 4-5




I was delivering a session about advanced assessment techniques last week, and while addressing different evidence collection methods, a participant asked me: “Do we get extra points during a regulatory audit if we perform better than the minimum requirement? If not, why do you suggest going the extra mile, while other RTOs pass an audit doing the minimum?” The conversation had quickly shifted from quality to compliance, and my response was: “How many quality RTOs were closed by ASQA during the last two years?” The answer was “... none”.

What distinguishes successful RTOs from the rest? Is it their market savvy, technological superiority, marketing, or their capable people? These training organisations survive on their culture, essentially a manifestation of their corporate values. It is these core values that guided them to the right decisions for long-term growth and survival. Committing to sustainability is also a corporate value. Successful RTOs:

• understand quality and industry-relevant training
• identified their core values early on
• implemented them and audited them regularly
• adhered to them through good times and bad
• changed their strategies, but never their values.

Here’s the difference between successful RTOs and RTOs that will probably not make it through the registration period: a successful RTO’s values are set by its founders, passed down the ranks, and nurtured and sustained. Its unsuccessful counterparts make up a set of values that change with each new owner or manager, and end up as a value statement on the company’s website that is never translated into practice within the organisation.

How Do You Sustain Corporate Values?

Managers at all levels must make the company’s core values a part of their everyday lives, not because they have to, but because they believe in them. As a result, they will be seen as role models who encourage employees to internalise these values.
The core values must be passed down from management to trainers and other employees through internal communications. Unfortunately, many RTOs contract very capable managers and trainers but still don’t achieve quality outcomes, because of a lack of CEO leadership and guidance. Establishing, promoting and monitoring the use of core values can help you to succeed.


002 NOVEMBER 2017



Can VET match micro-learning solutions? PAGE 16

Breaking bad design habits

10 things every learner needs to know


PAGE 18-19

Is VET trapped in the capabilities vs performance debate?

Trainers’ upgrade: a cost or a solution?


PAGE 18-19

Using games in training

PAGE 12-13



Don’t tick the wrong box

The quality and quantity of the evidence collected must support the assessor’s judgement. In general terms, learning in the vocational education and training spectrum means a consistent change in the candidate’s attitudes. In other words, the candidate is able to use the new skills and knowledge consistently, and apply them in different contexts and situations.

Assessment systems continue to be the most challenging area in an RTO’s operations, and yet the most critical for demonstrating quality outcomes. When dealing with assessments in Australia’s VET environment, we need to consider both the assessment system used by the training organisation and the outcomes produced by that system. The “assessment evidence” is collected and used to make a competency judgement against the unit(s) of competency. I would like to use this article to reflect on “assessment evidence”, and particularly on assessment evidence used to support decisions around completed tasks and the demonstration of skills. Quite often in my work as an auditor, I see “observation checklists” based on tick boxes next to text copied and pasted from the unit of competency’s performance criteria. Assessment activities used to produce evidence of a candidate’s skills will always require a task to be completed, under the conditions and standards relevant to the unit of competency’s elements and performance criteria, and will provide candidates an opportunity to demonstrate the skills required to perform that task. Knowing is not the same as doing, and VET is about doing. That is a fundamental principle for the design of the assessment, but as I mentioned above, I will focus here on the evidence produced, and not so much on the task itself. Do we have rules for accepting assessment evidence in Australia’s VET sector? Yes, the rules of evidence are: Valid, Authentic, Sufficient and Current, and these rules must guide assessors during the collection of evidence.

Let’s start with validity. What is considered “valid” evidence? According to the Standards for RTOs, evidence used to make a competency judgement must confirm “...that the learner has the skills, knowledge and attributes as described in the module or unit of competency and associated assessment requirements.” In other words, the assessment evidence collected confirms the candidate’s ability (performance evidence and knowledge evidence) to achieve each outcome (element) described in the unit of competency, under each condition/standard (performance criteria). How can we prove that an outcome has been achieved? The evidence must provide details about what was achieved, when it was achieved, and in which context. A tick in a box will not provide that information. Some assessors think they can just tick candidates off as competent, based on their “professional judgement”, and on occasion they feel insulted when the evidence used to make the judgement is requested. Quite often, I hear: “I used my criteria from 20 years of working experience.” To be very clear, I am not questioning an assessor’s industry experience; I celebrate it. But competency-based assessment is an evidence-based system. In other words, the judgement is made based on the evidence collected. The quality and quantity of the evidence collected must support the assessor’s judgement. In general terms, learning in the vocational education and training spectrum means a consistent change in the candidate’s attitudes. In other words, the candidate is able to use the new skills and knowledge consistently, and apply them in different contexts

and situations. If a product or sub-product is produced, the product itself will constitute valid evidence that the assessor can then assess against a benchmark (the unit). Assessing products requires comparing the product’s characteristics, features and use to the outcomes described in the relevant element/performance criteria of the unit. Assessors can use records of the product’s characteristics: for example, if the product is an object, you can record details of its physical characteristics (length, size, weight, height, resistance, conductivity, etc.), or if the product is something more intangible, such as a plan, characteristics that can be recorded and assessed could include content, relevance of the information provided, usability, veracity of instructions, and feasibility of projections/forecasts. If the task is a service (delivered to internal or external clients), records of the service provided will constitute valid evidence. For example, if the service is to resolve a customer complaint, evidence could include records of the complaint resolution, feedback from the client, photos, videos and records of the observation of the candidate dealing with the client (details of the protocol/procedures followed, techniques used, skills demonstrated, etc.).

As noted above, competence means applying the new skills and knowledge consistently, in different contexts and situations. This means that the evidence must demonstrate that the candidate performed the task(s) more than once. In some cases, the unit of competency indicates a specific minimum number of occasions for a task to be performed. Where it does not, RTOs should use industry engagement activities to determine a benchmark for sufficient evidence, in line with industry standards. This is the requirement under the rule of sufficiency.

The assessment evidence constitutes a legal document and, as such, the authenticity of the evidence is paramount. How can we prove that the evidence presented was either produced by the candidate or refers to the candidate? What measures are we using to demonstrate authenticity? In VET, there are three types of evidence we can use: direct, indirect or supplementary. When collecting direct evidence, it is important that the identity of the candidate is confirmed, that the assessor observed a task being completed or conducted oral questioning, and that details of the event are registered (i.e. date, time, location, duration). Tools used to produce indirect evidence, such as finished products, written assignments, tests, or a portfolio of evidence from a project,

must include measures to confirm authenticity. These could include photographic or video evidence, or further questioning from the assessor about the procedure(s) used to complete the task and how that procedure would be adapted if the situation or context were different. Many RTOs use a “declaration of own work” by the candidate as well. Supplementary evidence, such as a third-party report, is usually produced in the workplace. Measures to prove authenticity could include using referees to confirm the claims made in third-party reports, or providing an opportunity for the assessor to visit the workplace for further observations and interviews.

Finally, evidence collected must meet the rule of currency. This may be particularly challenging in an RPL assessment. Assessment evidence must prove that the candidate demonstrated the relevant skills and knowledge at the time that the competency judgement was made, or in the very recent past. What constitutes the “very recent past”? In some cases, the unit of competency provides information about currency; if no information is provided in the unit, RTOs should use industry engagement activities to establish a criterion for currency, in line with industry standards. In general terms, evidence collected, or relating to something that happened, more than two years prior to the assessment judgement is potentially not current (although in some industries evidence from more than two years ago may be accepted).

The bottom line here is that an assessment judgement must be made after the assessment evidence has been collected and compared against the unit requirements. The assessment evidence recorded (the facts of the case) demonstrates that the learner has the skills, knowledge and attributes described in the unit, and represents the legal proof of the competent/not yet competent judgement that will be available for subsequent procedures such as appeals, validations, audits or reviews. This evidence must meet the rules of evidence; otherwise the RTO will be in breach of Clause 1.8 of the Standards for RTOs.

Some assessors think they can just tick candidates off as competent, based on their “professional judgement”, and on occasion they feel insulted when the evidence used to make the judgement is requested.
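The rules of evidence discussed above lend themselves to a simple record-keeping structure. The sketch below is a hypothetical Python illustration only: the field names, thresholds and sample data are invented for the example, not taken from the Standards, and real sufficiency and currency benchmarks should come from the unit of competency and from industry engagement.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical evidence record -- the field names are illustrative only.
@dataclass
class EvidenceRecord:
    candidate_id: str
    unit_code: str            # unit of competency being assessed
    what: str                 # what was achieved
    when: date                # when it was achieved
    context: str              # in which context
    identity_confirmed: bool  # an authenticity measure
    occasions: int            # how many times the task was performed

def check_rules(rec, min_occasions=2, currency_years=2, today=None):
    """Return a pass/fail flag for each of the four rules of evidence."""
    today = today or date.today()
    age_years = (today - rec.when).days / 365.25
    return {
        "valid": bool(rec.what and rec.context and rec.unit_code),
        "authentic": rec.identity_confirmed,
        "sufficient": rec.occasions >= min_occasions,
        "current": age_years <= currency_years,
    }

rec = EvidenceRecord("C-001", "TAEASS502", "Developed an assessment tool",
                     date(2017, 6, 1), "Workplace project", True, 2)
print(check_rules(rec, today=date(2017, 11, 1)))  # all four rules pass
```

For instance, an RPL candidate whose only evidence is a single task performed three years ago would fail both the `sufficient` and the `current` checks.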



Three Reasons Why Compliance Training Fails

We are in the training industry, yet many training programs, including some formal training programs, fail to have a positive effect on our RTO’s performance. In this article, I will analyse the top three reasons why RTO compliance training fails.

Lack of Alignment with the RTO’s Needs

The payoff from a training program comes from the business measures that drive it. Simply put, if a training program is not aligned or connected to a business measure, no improvement can be linked to the program. Too often, training is implemented for the wrong reasons: a trend, a regulatory requirement, or a perceived need that may not be connected to any RTO measure. Initial training needs may be linked to the objectives and evaluation by using a consistent four-level concept:

1) Reaction (how we want students to perceive the program and its outcomes)
2) Learning (what new skills and knowledge we want students to learn)
3) Application (how we want students to use the new skills)
4) Impact (what RTO performance metrics we want to change)

Without the business connection at Level 4, the program will have difficulty achieving any results. One major RTO faced this problem directly as it reviewed its trainers’ professional development plan. Several PD sessions were conducted to further develop trainers’ skills and knowledge to assess students. The PD sessions were not connected to any RTO performance metric, such as the number of non-compliances in Clause 1.8 or the number of rectifications identified in validations. The PD sessions were also not connected to the RTO’s operations, participants couldn’t use the procedural skills back on the job, and therefore the RTO didn’t improve its assessment practices.
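The four-level concept above can be made concrete as a simple check on a program’s objectives. The sketch below is illustrative only; the program data and function name are invented for the example.

```python
# Record training objectives at the four levels described above and flag
# programs with no Level 4 (impact) link. Level names follow the article;
# the PD session data is invented for illustration.

LEVELS = ("reaction", "learning", "application", "impact")

def missing_levels(program):
    """Return the levels for which a program defines no objective."""
    return [lvl for lvl in LEVELS if not program.get("objectives", {}).get(lvl)]

pd_session = {
    "name": "Assessor PD session",
    "objectives": {
        "reaction": "Trainers rate the session as relevant to their role",
        "learning": "Trainers can map tasks to unit requirements",
        "application": "Trainers apply mapping when building assessment tools",
        # no "impact" objective -- not linked to an RTO metric such as
        # the number of Clause 1.8 non-compliances
    },
}

print(missing_levels(pd_session))  # ['impact']
```

Run against this hypothetical PD session, the check flags the missing Level 4 (impact) objective, which is exactly the gap the RTO in the example discovered.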

Failure to Recognise Non-Training Solutions

If the wrong solution is implemented, little or no payoff will result. Too often, training is perceived as a solution for a variety of performance problems when training may not be an issue at all. A recent evaluation of a community college illustrated this problem. In its training program, the college attempted to prepare career counsellors so they could provide advice to potential students about training products. The problem the college faced was a significant number of students enrolling in inappropriate courses, and the training produced little change in that outcome. An impact study subsequently revealed that the culprit was the enrolment procedure, which accepted enrolments prior to potential students’ interviews with career advisers. When probed for a reason for the poor results, the college realised that unless its enrolment procedure changed to give career advisers time to interview potential students before enrolments were accepted, the results would not change. Attempting to solve job performance issues with training will not work when factors such as systems, job design and motivation are the real issues. To overcome this problem, staff training must focus on methods to analyse performance rather than conduct traditional training needs assessments: a major shift in performance improvement that has been developing for many years. Up-front analysis should be elevated from needs assessment, which is based on skills and knowledge deficiencies, to a process that begins with business needs and works through the learning needs.

Lack of Specific Direction and Focus

Training should be a focused process that allows stakeholders to concentrate on desired results. Training objectives should be developed at higher Kirkpatrick levels than traditional learning objectives. These objectives correspond with six measures that lead to a balanced approach to evaluating the success of training. Most training programs should contain objectives at multiple levels, ideally including Levels 3 and 4. An RTO’s internal training is often decided without consulting all stakeholders: what are the RTO’s performance needs for the CEO, the Marketing Manager, the Training Manager, and the Quality and Compliance Manager? When developed properly, and in consultation with all relevant stakeholders, these objectives provide important direction and focus. Training designers and developers must focus on application and effect, not just learning. Facilitators need detailed objectives to prepare individuals for the ultimate outcome of the learning experience: job performance change. Participants need the direction provided by Level 3 and 4 objectives to see clearly how the training program’s outcome will actually help the RTO. Not all programs will need such detailed up-front analysis, but it is a critical issue that needs more attention, particularly when training is expected to have an effect on the RTO’s performance.

Transform your ideology on COMPLIANCE and ASSESSMENT Evidence Management



CHANGE FOR THE BETTER

Switching to a new LMS is a big decision. Canvas comes with sky-high adoption rates, unparalleled services and support, and an open, innovation-obsessed platform. Because if you’re going to make a big decision, make the right one.

To learn more about Canvas or request a free demo, visit https://www.canvaslms.com.au/vet or call 1 300 956 763


BREAKING BAD DESIGN HABITS

Being a training designer presents daily opportunities to challenge ourselves, push boundaries, and design solutions that make our businesses successful, our clients happy, and our learning audiences more effective. But sometimes our work isn’t inspirational, and it’s really hard to get ourselves psyched up to change the world while thinking about compliance frameworks and guidelines. A lack of inspiration can be made even worse by a healthy dose of design complacency. When we fall back on bad design habits, we can alienate ourselves from the real problems, from our audience, and ultimately from our professional self-worth. Are you guilty of any of these bad design behaviors?

Skipping the needs analysis

Would you be willing to undergo major surgery without first undergoing some less-invasive testing? Most of us wouldn’t be too keen on trusting the word of a doctor who would literally operate on a hunch; we’d like some empirical evidence before we invest in a costly, painful, and potentially risky procedure. Yet how many of us are guilty of skipping over the training version of pre-operative testing? How often do we tell our business cohorts to “just trust us” about the root cause of a performance gap? At one time or another, most of us have been forced by timing or circumstance to eliminate or minimize the needs analysis process. But when we short-change front-end analysis or dismiss it altogether, we become the equivalent of a quack, randomly applying training solutions without first understanding what’s needed.

Getting married to a concept

Confession: sometimes I fall in love with my own ideas. When you love what you do and are passionate about the value of learning, it’s easy to get super-excited about your work. But with all that excitement there often comes the “reality check” moment when a client tells you that the high-concept idea you’ve fallen for is hopelessly impractical. Does that mean it’s time to toss aside your brilliant concept and settle for something more mundane? Not necessarily. Here’s how I see it: the essence of being a good designer is working within tight constraints. Our best designs surface when we allow those boundaries to inspire us rather than defeat us. The most effective designs are the ones that manage to achieve a balance in the midst of many opposing factors. The ultimate designer skill you bring to the table is this: a willingness to set aside preconceived notions and apply your out-of-the-box thinking in ways that balance business needs and resources with beauty and ingenuity.

Speaking in jargon

I was working with a client recently when I caught myself saying, “Level 1 data is not enough to demonstrate learning, nor the application of learning.” The statement was in fact true: collecting “reaction/satisfaction” data (Level 1) is not enough to paint the big picture, and collecting data about learning (Level 2) and about the application of learning in the workplace (Level 3) is also required. But I could tell by the expression on her face that I’d lost her. Good designers know that you need to have meaningful conversations to build a successful collaboration. This means you need to speak in language that everyone understands. Whether you’re trying to get buy-in for your design ideas from business leaders or act as a credible internal consultant to a group of subject matter experts, you really can’t afford to alienate anyone with jargon-filled statements that are less about building understanding and more about demonstrating your design prowess. It may occasionally be necessary to use design terminology to establish your authority, but for most interactions, wouldn’t it be much more productive to communicate as a trusted partner, in a language everyone understands?

RTO 30122



Trainer’s upgrade: a cost or a solution?


Like other compliance requirements, many people in this sector believe there has been limited analysis regarding how this training will affect an RTO. Yes, it means the RTO will comply with the standards, but will the added cost solve any problems?

Since the Assistant Minister for Vocational Education and Skills, the Hon Karen Andrews MP, announced the most recent amendment affecting the requirements for trainers and assessors working in VET, many RTO managers and trainers have regarded this as another compulsory course to be recorded as an operational expense. Like other compliance requirements, many people in this sector believe there has been limited analysis of how this training will affect an RTO. Yes, it means the RTO will comply with the standards, but will the added cost solve any problems? If this training doesn’t have a positive effect on stakeholders’ performance and results, it is not a solution. In this article, I would like to analyse the desired and potential effects of this TAE upgrade for RTOs and trainers. Firstly, let’s clarify the requirement. Under the updated Standards for RTOs, trainers and assessors using the TAE40110 Certificate IV in Training and Assessment as their teaching credential must hold the following two units before 1 April 2019:

• TAEASS502 Design and develop assessment tools, and
• TAELLN411 Address adult language, literacy and numeracy skills.

Why are trainers required to further increase their skills in developing assessment tools and addressing adult LLN skills?

According to statistics published by ASQA, approximately 75% of RTOs fail to demonstrate compliance against assessment practice requirements and against matching the LLN skills of students with course entry LLN levels.

Can these issues be solved with training?

Is there a performance issue?

Yes, there is a clear performance issue with assessment practices. Assessment systems used by RTOs are not meeting training package requirements or the Principles of Assessment, and do not produce sufficient, valid, authentic and current evidence. The second issue relates to students being enrolled into courses without determining whether the entry LLN skill levels have been met.

What is happening or not happening?

Based on my experience as an auditor, I have identified five critical factors that affect RTOs’ assessment practices:
1. Units of competency are not unpacked effectively.
2. Assessment evidence is not analysed correctly.
3. Assessment collection methods, tasks and evidence are poorly mapped to the unit of competency requirements.
4. Adequate instructions are not given to assessors on how to administer assessment tools and interpret assessment evidence.
5. Inconsistent administration of assessment tasks.

We can only solve problems with training if there is a gap in skills. And yes, trainers and assessors currently working in VET have significant gaps in the skills and knowledge required to:
1. Interpret units of competency.
2. Develop effective assessment tools (instructions and tasks) to collect evidence against the requirements of units of competency.
3. Implement assessment practices in line with the Principles of Assessment.
4. Collect assessment evidence that meets the relevant unit of competency requirements and the Rules of Evidence.

But performance issues go beyond gaps in the skills of an RTO’s trainers: a lack of support and of effective quality assurance systems also plays an important role in its assessment practices.

Is TAEASS502 Design and develop assessment tools the solution?

It could be, but it won’t be if we continue to do the same as we have been doing with the previous upgrades (BSZ to TAA, and TAA to TAE). Let’s start with the outcomes included in the unit. The TAEASS502 elements are:
• Determine the focus of the assessment tool
• Design the assessment tool
• Develop the assessment tool, and
• Review and trial the assessment tool.

This unit is relevant to four of the five performance issues listed above, and will provide trainers with the opportunity to develop at least the first two sets of skills listed in the skills gap. When the training solution is designed, developed, delivered and assessed, the impact objectives must be considered. In other words, this course must be adopted not only as the training that meets the new requirement under Clause 1.14 (trainers’ credentials), but as the training solution that will support the RTO to meet the requirements under Clause 1.8 (assessment practices). Considering the structure of the Standards for RTOs, being non-compliant with Clause 1.8 will also produce non-compliance with Clauses 1.4, 1.12, 2.1, 3.1 and 8.4. Furthermore, this course should also have a positive effect on the compliance status of Clauses 1.9 and 1.10 (validations) and 1.16 (trainers’ relevant PD). In summary, a Statement of Attainment for the TAEASS502 unit can give the RTO a tick in Clause 1.14, but the real benefit, and return on investment, will only happen if trainers develop the skills required to perform the necessary tasks to meet the requirements under Clauses 1.4, 1.8, 1.9, 1.10, 1.12, 2.1, 3.1 and 8.4. If we compare the cost of the course with the benefits of maintaining compliance with those clauses, the potential positive return on investment is evident. RTOs should therefore see this course as an investment and not a cost: an investment that will produce real, tangible benefits far greater than the investment itself, and I suggest that RTOs should measure this benefit. Obviously, if the course doesn’t produce a positive effect on operations, the investment will become a cost. This means it is critical that RTOs discuss with the training provider the desired application and impact objectives for the course. RTOs will need to ensure trainers and assessors have the opportunity and the support to apply the skills learnt. This may require a change to current practices. For example, trainers should be more involved in designing, developing and reviewing assessment tools, and validation processes may need to be strengthened so they have a greater effect as quality review and control processes.

How can we measure the application of the skills? What data needs to be collected?

There are some points to consider here:
• What new knowledge will be applied?
• What new tasks will be performed? What new steps?
• What new procedures and processes will be implemented or changed?
• What new guidelines will be implemented or changed?

The answers to these questions will help us to determine what data to collect. For a standard RTO, new tasks could include: interpreting and unpacking units of competency, analysing the assessment evidence required, considering learners’ needs during the design of assessment tools, considering the rules of evidence during the design of the evidence collection plan, or reviewing mapping documentation. These tasks and steps will have an effect on the processes of designing, developing and using assessment tools, and for this reason RTOs must review and update the procedures and guidelines already in place to support the application of the new skills. The reason to measure application is not only to confirm the success of the training, but also to support continuous improvement. The analysis of the application data should reveal whether the skills were enabled or whether there were barriers. The RTO can use this information to overcome barriers and to better exploit the various ways of maximising the positive effect on assessment practices.

How can I measure the effect of the application of the new skills?

At this level, the RTO wants to measure the effect on assessment practice outputs, quality, cost and time. Costs can be determined by measuring the reduction in costs associated with engaging external consultants to develop assessments, the costs associated with rectifying assessment tools, and/or the assessment evidence collected. Finally, the RTO can measure, for example, a reduction in the time required to develop or modify assessment tools. The opportunity is there, and whether this upgrade has a positive effect on our VET sector will depend on the approach taken by RTOs and trainers.
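The cost-benefit comparison suggested above is simple arithmetic. The sketch below uses the standard ROI formula with invented figures, purely to illustrate the calculation an RTO could run with its own measured numbers.

```python
# Illustrative only: the dollar figures below are invented, not from the
# article. ROI is net benefit over cost, expressed as a percentage.
def roi_percent(benefits, costs):
    """Return (benefits - costs) / costs as a percentage."""
    return (benefits - costs) / costs * 100

course_cost = 1500.0            # hypothetical cost of the upgrade per trainer
consultant_savings = 2000.0     # reduced external assessment development
rectification_savings = 1000.0  # fewer tools needing rectification

print(roi_percent(consultant_savings + rectification_savings, course_cost))
```

With these invented figures the ROI is 100 per cent: the measured benefits are double the cost. A negative result would indicate the course remained an expense rather than an investment.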

A Statement of Attainment for the TAEASS502 unit can give the RTO a tick in Clause 1.14, but the real benefit, and return on investment, will only happen if trainers develop the skills required to perform the necessary tasks to meet the requirements under Clauses 1.4, 1.8, 1.9, 1.10, 1.12, 2.1, 3.1 and 8.4.



Using Games in Training

Studies have shown that the use of games in training serves two purposes:

Optimize the environment. Be considerate of how participants learn, and set up games accordingly. Maximize the opportunity to learn by tailoring game usage to make even the most timid wallflower flourish.

1) Resetting participant concentration and energy levels. The human mind can only absorb so much information at one time. Successful training is commonly segmented into blocks of approximately 20 minutes followed by group problem solving, open discussion, and games. Using games in this way increases knowledge retention and keeps attention spans high.

2) Reinforcing and reviewing learning. The effective execution of games plays a large role in knowledge retention. When used during training, games provide an enjoyable way of reinforcing knowledge and skill use. And when used after training as part of on-the-job reinforcement, games provide a quick and fun refresher of what was learned during training. Games break the ice, energize, and most importantly, reinforce and review learning. Game-based learning activities build confidence, lift morale, spark enthusiasm, and ultimately achieve results.

When you’re considering whether or not to use a game in training, ask yourself if the game will do one of the following:
• Provide social interaction
• Energize the group
• Reinforce learning.

If a game you are considering does not meet one of these crucial requirements, rethink your selection.

Why Social Interaction Is Important

When participants are gathered together for classroom training, they are often meeting face-to-face for the first time. Beginning class with a game that encourages social interaction can create common bonds between participants and make them more comfortable, which promotes open speech and increased sharing. One simple icebreaker requires very little preparation: just hand out a small coin to each participant. (It is helpful if the dates on the coins are in a suitable range for participant ages.) Group participants into pairs and ask them to exchange basic information and share a favorite memory from the year stamped on the front of their coin. After a few minutes, ask participants to share their partner’s information. Sharing creates an immediate bond and assists participants in feeling comfortable in the group training environment.

Energize Class Participants to Reset Focus
Mixing in a game or two during extensive class time is an ideal way to rejuvenate participants and get them back on track and focused on the important material you're teaching. Games that allow participants to stand up and move around, perhaps even laugh a little, are great ways to keep everyone fresh and focused. Adding a competitive edge to a game is a surefire way to engage participants. "Alphabet Improv" is a variation of a popular party game in which participants have spontaneous conversations by beginning each statement with a particular letter of the alphabet. Divide the class into pairs and tell them to imagine they are in a typical work setting and are to have a conversation by alternating each statement with a consecutive letter in the alphabet. The pairs can practice and then play in front of all class participants, with the winning pair receiving a prize, or play less competitively in pairs. Be warned that there is usually a lot of laughing as employees struggle to begin conversations with the correct letter.

EXAMPLE:
Participant #1: All next week, our 2012 products will be on sale.
Participant #2: Bet that will be a busy week!

Encourage participants to play quickly and spontaneously and not think too hard about what to say next.

Games to Reinforce Learning
The use of role-playing games increases not only knowledge retention but also understanding by a significant rate. Training that incorporates real-life scenarios for the participants makes the class experience more relevant and more likely to assist in long-term behavioral change. When creating role-playing activities, select story lines that benefit the majority of learners. Clearly state the objectives for the employees participating in the role-playing exercise. One game to reinforce learning is Luck of the Draw. Ahead of time, prepare slips of paper with a question and answer related to the material covered (include a few trivia questions). Put the slips of paper in a hat, bag, or bowl. Be sure you have enough question-and-answer slips so that each team has an equal number of opportunities to score points. For example, if you have 3 teams, you'd want 15, 18, 21, or 27 questions. Determine a reasonable amount of time for teams to answer questions. Divide the class into the predetermined number of teams. Ask one member of the team to come to the front of the class and select a slip of paper. This person is to read the question to their own team. The trainer keeps score. If the team gives the correct response, they get 2 points. If the team can't answer or answers incorrectly, the next team in rotation gets an opportunity to provide a correct answer for 1 point. This game is a fun way to both reinforce learning and energize the class.
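The mechanics above (an equal number of turns per team, 2 points for a correct answer, 1 point for a steal by the next team) can be sketched in a few lines of code. The team names and round counts below are hypothetical examples, not from the article:

```python
# Minimal sketch of the "Luck of the Draw" rules described above.

def questions_needed(num_teams: int, rounds_per_team: int) -> int:
    """Each team must get an equal number of turns, so the question count
    is a multiple of the number of teams (e.g. 15, 18, 21 or 27 for 3 teams)."""
    return num_teams * rounds_per_team

def score_question(scores, drawing_team, answered_correctly, stealing_team=None):
    """Scoring rule: 2 points for a correct answer by the drawing team;
    otherwise the next team in rotation may answer for 1 point."""
    if answered_correctly:
        scores[drawing_team] += 2
    elif stealing_team is not None:
        scores[stealing_team] += 1

scores = {"Team A": 0, "Team B": 0, "Team C": 0}
print(questions_needed(3, 6))                      # 18 slips for 3 teams, 6 rounds each
score_question(scores, "Team A", True)             # Team A answers its own question
score_question(scores, "Team B", False, "Team C")  # Team B misses; Team C steals
print(scores)                                      # {'Team A': 2, 'Team B': 0, 'Team C': 1}
```

The multiple-of-team-count check is the reason the article suggests 15, 18, 21 or 27 questions for three teams: each is divisible by 3, so no team is short-changed a turn.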

Five Tips for Effective Game Use in Training
1) Make it relevant. Align chosen games with training goals, keeping all activities on topic and engaging to participants.
2) Consider your audience. When selecting games, keep participants in mind. Will prizes add excitement and encourage participation, or cause the class to segment and become unruly?
3) Optimize the environment. Be considerate of how participants learn, and set up games accordingly. Maximize the opportunity to learn by tailoring game usage to make even the most timid wallflower flourish.
4) Watch your timing. While it is important to watch the clock, do not overdo it. Allow time for 90% of participants to finish before officially ending the activity. Ending the activity too soon will lessen its effect, and allowing too much time will give participants the opportunity to lose focus.
5) Create movement. Keeping a room full of people engaged for hours on end is not easy. Choose games that require movement to rejuvenate the class and get them ready for more learning.

Simple, relevant games enhance traditional training methods by creating a comfortable environment for learning, keeping participants energized, and encouraging early adoption of desired performance.



Internal Audit Service
Achieve And Maintain Compliance For Your RTO
If you operate an RTO, you will understand just how important compliance is to your business. Insources' internal audit service ensures your RTO is meeting the requirements of the VET Standards. We provide an accurate and independent picture of your RTO's compliance status by validating your organisation's performance and presenting fresh perspectives and ideas based on best practices.

Why Perform An Internal Audit?
• Manage compliance based on evidence of your RTO's performance
• Promote a continuous improvement culture within your RTO
• Benchmark your RTO against industry best practice
• Validate your systems
• Measure the results of your operations

What Our Clients Are Saying “Audits always take a lot of energy and are quite taxing, however, our entire team agreed that working with Insources was a very positive experience. Their approach and service was exactly what we were looking for. We look forward to partnering with Javier and the team at Insources for our RTO requirements and are confident that this will support our RTO to grow and flourish in the years ahead.”

“We recently engaged Insources to perform quality auditing for our organisation. Their level of knowledge of the VET quality framework is superb, as is their ability to communicate what is needed in a practical sense to fix problems. They demonstrated an ability to quickly understand the various disciplines we operate in and were able to offer really practical advice on how to improve our training and assessment practices.”

Jan Hurn, CEO Wesley Health Management

Damian McKenzie-McHarg, Manager, Quality & Risk TAFE NSW Riverina Institute

For More Information Contact Us On 1300 208 774

An internal audit is essential to identify non-compliances or gaps in your RTO's performance, which can put your RTO's registration or funding contracts at risk. Internal audits are also a valuable tool to identify areas for continuous improvement. Insources has developed an audit process that ensures a high-quality result in a time-effective manner. The basic process we follow is below:

1. Initial Planning Meeting: Teleconference/online meeting prior to the site visit. During this meeting, we will request some information that can include documented samples of training and assessment strategies and assessment tools.

2. Desk Audit: Our consultant reviews the selected material prior to the site visit.

3. On-Site Visit: The audit takes 1-2 days (this may vary according to factors such as scope of registration, number of students, locations and funding arrangements) and includes opening and closing meetings to meet the Regulator's Standards. All relevant Standards are audited.

4. Comprehensive Audit Report And Action Plan: Detailed report including all findings and a list of recovery actions prioritising non-compliances and opportunities for improvement (issued within 14 days of the on-site visit).

5. Post Audit Meeting/Training: Our consultant will present the report and explain alternatives, best practices, and critical concepts required to successfully complete the action plan.

6. Free Training: You receive free registration to five professional development webinars for your staff.

More Than Compliance

Our service goes beyond a standard compliance report. We provide advice and examples of best practices, a complete action plan to rectify any identified non-compliances, and highlight opportunities for continuous improvement. We share our extensive industry experience and knowledge by offering a two-hour post-audit online training session. During this session an Insources consultant presents the report to your team, outlining best practices and detailing the proposed action plan. Also included are five complimentary registrations for Insources professional development webinars. By participating in these webinars your team can develop further skills in maintaining RTO compliance. More than 20 different webinars are available; see website for details.

Why Work With Insources

We specialise in working within the Vocational Education and Training industry and have been helping businesses in this sector for more than 10 years. In that time, we've helped many businesses not only to achieve their compliance and continuous improvement goals, but also to do it with confidence. Whether your RTO needs support with audit preparation, re-registration, independent external validation, systematic review of training products, achieving best practices in your training and assessment, or professional development for your staff members, talk to our expert team to find out how we can support your business to thrive.



1300 208 774

Insources Education Pty Ltd. Suite 3, 16 Sorrell Street, Parramatta, NSW, 2150


Using Skill Sets to meet industry needs

Vocational Education and Training must provide solutions and support individuals and industry in Vocational Preparation and Vocational Development (Continuous Professional Development). Although our VET system is a leader in Vocational Preparation, mainly because of government funding conditions, RTOs are losing opportunities in Vocational Development programs.

Non-accredited training programs are providing an incredible range of learning opportunities to support our workforce with professional development. These programs are presented in different formats, from online platforms and symposiums to summits and conferences. And, importantly, these micro-learning options are meeting current industry needs. To compete in the corporate training and development world, RTOs should look at these opportunities and use micro-learning techniques to meet that demand.

The flexibility of training packages, which allows for the delivery of stand-alone units and skill sets, is not recognised for government-funded programs, which today account for more than 70 per cent of all VET training delivered in Australia. Rapid changes in industry processes and technological advances, together with the definitive adoption of robotics in the workplace, have created a growing need for continuous development of skill sets. The Australian government should update funding programs to include skill sets and stand-alone units, as this is the easiest way to measure the return on investment in these training programs.

I started looking at international trends for micro-learning, and discovered some interesting statistics. According to the Association for Talent Development (ATD), 92 per cent of organisations (worldwide) are using micro-learning plans, and over 67 per cent of organisations not using micro-learning are planning to start. For RTOs to develop industry-relevant training products, we should look at these statistics.
Micro-learning techniques have three primary benefits, and this is why organisations are considering these options:
• Micro-learning is cheaper and faster. Materials take less time to source, produce, maintain and consume than full qualifications. This enables re-use and re-packaging of micro-learning programs. It also allows trainers to focus on quality without sacrificing the amount of training, because irrelevant skills are not included in the program.
• People are more engaged. Employees today devote 1 per cent of their time to learning (roughly 24 minutes a week), check their phones 150 times a day, and switch tabs every minute. Micro-learning fits perfectly into this continuous diet of email, Slack, and social media.
• People learn more. Though there are many factors that drive effective learning, managing cognitive load is one of the most important. The problem with typical learning experiences like lectures or long e-learning videos is that they present too many things at once for too long a period of time.

These are real benefits, but they don't necessarily translate to improved performance on their own. Through industry consultation we discovered that timing plays an important part, and the key is to have a training solution that solves current problems. One of the most difficult and least scalable things organisations must do is motivate their employees, and learning requires a lot of sustained motivation. Compliance training is a good example. But how can we identify the right time, when our participants' motivation is high? There are reliable triggers that open up motivational windows in which individuals are willing, even excited, to learn. These windows can last from a few months (think: when someone is given a new role or responsibility), to a few weeks (think: when someone has a big deadline or presentation coming up), to a few minutes (think: when someone is walking into a big meeting for which they're not fully prepared). In today's competitive environment, RTOs are required not only to set Learning Objectives to describe what participants will be able to do at the end of the training, but also Application Objectives to determine how and when those skills and knowledge can be used and applied, to attract participants at the right time, when they are highly motivated.
Learning experiences presented to learners at the wrong time will produce little or no result, and the margin for error is very slim. Continuous review of our VET sector, Training Packages and funding arrangements is required, and our Nationally Recognised Training System should be adapted to meet emerging needs in vocational education and adult learning trends. This new generation of micro-learning solutions is certainly making an impact.


Is VET Trapped in The Capabilities vs Performance Issue?


Frequently, I encounter VET practitioners whose actions and comments indicate they assume building capability and enhancing performance are the same. Learning alone will not yield performance results. There is no business or performance measure that improves because of what people know; these measures improve because of what people do with what they know. VET practitioners do not have control over what our students do with what they learn, and very little is done to measure performance results.

What Is the Difference Between Capability and Performance?
Enhancing capability or skill is a learning outcome. It means people have the capability to perform in some manner. It does not mean that they will, only that they can. A performance outcome occurs when people take what they know and turn it into what they do on the job.

And, of course, making the conversion from learning to doing requires a work environment that supports the capability that has been developed. Engaging industry stakeholders when planning our Training and Assessment Strategies will help individuals and organisations to use the capabilities we develop in the VET sector to improve performance. A good process to go through with an industry stakeholder is to review the "skill...will...hill" process, and work together to develop better training evaluations. People develop skills but then need both the will (motivation) to apply that skill and the ability to overcome any hill (obstacle) in the work environment that could impede application. Only then can performance be driven by the capability that has been developed. For this to happen, we need more and better collaboration between RTOs, SSOs and industry. We know that performance is what people do on the job. We also know that, too frequently, people acquire capability that they never use on the job. Yet VET training is expected to yield results. Training Package developers play an important role here. As VET professionals, we need to make performance—and not just learning—our business. And we can do that in two ways:
1. Keep the difference between skill and performance clear in our minds. Training Packages are Occupational Standards and should focus on outcomes and performance.
2. View building capability as a means to an end, not the end. Our end goal is to enhance on-the-job performance that benefits the organisation.
Industry engagement will provide information about how the work environment will support the skills we plan to develop. We need to partner with industry representatives who can work with us to ensure skills will transfer to the workplace.



10 Things Every Learner Needs to Know

Will the effective skills your participants are learning about in the classroom translate to what they do in the workplace? As someone who cares deeply about this question, I'm going to make two bold statements, and then make a recommendation. Without a basic understanding of how learning happens, which is outlined in the rest of this article, there is very little chance that program participants will actually implement what they learned. Not many training organisations are presenting training participants with this perspective. At Insources, we use not only reaction and learning objectives, but also application and impact objectives, when designing training. We discuss those application and impact objectives with participants during training, and we provide some mechanisms for them to follow up on the achievement of those objectives. It's important to give your learners a clear understanding of the kind of follow-through they'll need to perform to ingrain new work habits. I strongly recommend that you reproduce the following information, give it to every learner, and have them read it (and, ideally, discuss it) at the beginning of every program.

KNOWING WHAT TO DO ISN'T THE SAME AS DOING IT.
You can learn what to do through classroom instruction, books, videos, and articles. But this aspect of learning is only the beginning. Acquiring knowledge doesn't guarantee that you'll apply it when you need to. As Morpheus told Neo in the sci-fi movie The Matrix, "There's a difference between knowing the path and walking the path." Behavior is what counts. And most behavior in a busy workplace is a result of habit, not conscious decisions.

SKILLS, HABITS, AND ROUTINES ARE HARD-WIRED IN THE BRAIN.
You need to appreciate what's really going on when you master a new skill. When you repeat a behavior, the brain cells involved in the behavior are stimulated to connect with each other. With enough repetition, physical circuits form that enable you to repeat the behavior easily and quickly. This is true whether it's your golf swing or the way you deal with other people. This means that the way you do things now is already hard-wired in your brain. It also means that to improve a skill or work habit, you need to rewire your brain.

THE BRAIN WILL WIRE ITSELF FOR BOTH GOOD AND BAD HABITS IF YOU REPEAT THE BEHAVIOR OFTEN ENOUGH.
Your brain doesn't distinguish between effective and ineffective patterns. It will never say, "Wait a minute, I can't program that for you because it will cause you problems." No, it will simply start connecting the brain cells for the behavior each time you repeat it. This is why you could end up interrupting people when they're talking. Or yelling at them when you get upset. Or procrastinating when faced with a difficult decision.

MOST ADULTS HAVE WIRED A LOT OF BAD HABITS OVER THE YEARS.
During your life so far, you've developed "your way" of doing lots of things. And your way of interacting with others probably includes a few behavior patterns that cause problems. This is because practically nobody was taught the best practices when they were young. You picked up ways of dealing with family, friends, co-workers and others "on the street," so to speak. And some of these patterns may not work well when dealing with managers, team members, and customers. So, when you attend a training course to learn better ways of dealing with people, you don't walk in with a blank slate. You have your own familiar, comfortable ways of handling things. Your challenge will be to do the work after instruction in order to rewire your brain.

TAKE RESPONSIBILITY FOR YOUR OWN LEARNING.
A trainer can show you how to improve. Your boss can encourage you to change, but only you can make this happen. Only you can do the work to rewire your brain for a new skill or work habit.

REWIRING FOR A NEW SKILL WILL TAKE A LOT OF REPETITIONS.
What happens in the classroom is a great start, but it's just the beginning. Most of the effort of learning has to happen after instruction. Back on the job, you'll need to use what you learned. Like mastering a sport skill, it will take practice, practice and more practice before the brain cells involved will physically interconnect into a circuit that makes the skill feel natural. So you must do the reps, or you'll eventually go back to your old way of doing things.

ACCEPT THAT AT FIRST YOU'LL HAVE FAILURES AND SETBACKS, AND DON'T GIVE UP.
Even if you value what you learned and fully intend to implement it, at first you may forget to do so. Or if you make a conscious effort, the skill may feel awkward and ineffective. Almost everyone experiences this kind of frustration initially. The habits you already have get in the way of the new habits you're trying to adopt. You'll be tempted to give up trying. You may think, "This doesn't feel right. I don't think this is going to work for me." The key is to persist past this "crunch point." If you keep trying, you'll forget less often. Your efforts will start to achieve results. Keep trying and your "failure rate" will eventually approach zero. The new habit will become dominant. You'll find yourself performing the new, improved skill without consciously deciding to do it.

FOCUS ON ONE SKILL OR WORK HABIT AT A TIME.
If you're an ambitious individual, you may want to correct several behavior patterns all at once. This would be a mistake. In a busy workplace, you'll find that it's hard enough to apply one new skill repeatedly. Trying to work on several skills simultaneously will water down your efforts. You won't get enough reps to improve any of them. So focus on one area until it starts to feel natural and you're having success. Doing so will be an outstanding personal achievement. Then you can focus on improving something else.

LEARN FROM YOUR MISTAKES.
Your early efforts may be discouraging, but you can use these experiences to improve. Ask yourself: What happened? Why did it happen that way? What should I consider doing differently to get better results? There are lessons to be learned from every experience, if you take time to reflect on it.

GET HELP.
Ask people who care about your development—your boss, co-workers, other training participants—to help you stay focused, encourage you and hold you accountable. This kind of support coaching can accelerate your learning. Ask for their input, ideas and feedback. Ask them whether they've noticed improvement, and get their suggestions for how you can perform better.







Turning Negative Results into Positive Change
By Jack J. Phillips, Ph.D.

Learning and Development professionals often must evaluate their key learning programs, collecting several types of data—reaction, learning, application, impact, intangibles and maybe even return on investment. What if the evaluation produces disappointing results? Suppose application and impact were less than desired, and the ROI calculation negative. This prospect causes some learning executives to steer clear of this level of accountability altogether. For some L&D professionals, negative results are the ultimate fear. Immediately, they begin to think, "Will this reflect unfavorably on me? On the program? On the function? Will budgets disappear? Will support diminish?" These are all legitimate questions, but most of these fears are unfounded. In fact, negative results reveal the potential to improve programs. Here are 11 ways to address negative results and use them to facilitate positive transformations:

1. Recognize the Power of a Negative Study
When the study results are negative, there is always an abundance of data indicating what went wrong. Was it an adverse reaction? Was there a lack of learning? Was there a failure to implement or apply what was learned? Did major barriers prevent success? Or was there a misalignment in the beginning? These are legitimate questions about lack of success, and the answers are always obtained in a comprehensive evaluation study.

2. Look for Red Flags
Indications of problems often pop up in the first stages of initiation—after reaction and learning data have been collected. Many signals can provide insight into the program's success or lack of success, such as participants perceiving that the program is not relevant to their jobs. Perhaps they would not recommend it to others or do not intend to use it on the job. These responses can indicate a lack of utilization, which usually translates into negative results. Connecting this information requires analyzing data beyond overall satisfaction with the program, the instructor and the learning environment. While important, these types of ratings may not reveal the value of the content and its potential use. Also, if an evaluation study is conducted on a program as it is being implemented, low ratings for reaction and learning may signal the need for adjustments before any additional evaluation is conducted.

3. Lower Outcome Expectations
When there is a signal that the study may be negative, or it appears that there could be a danger of less-than-desired success, the expectations of the outcome should be lowered. The "under-promise and over-deliver" approach is best applied here. Containing your enthusiasm for the results early in the process is important. This is not to suggest that a gloom-and-doom approach throughout the study is appropriate, but that expectations should be managed and kept on the low side.

4. Look for Data Everywhere
Evaluators are challenged to uncover all the data connected to the program—both positive and negative. To that end, it is critical to look everywhere for data that shows value (or the lack of it). This thorough approach will ensure that nothing is left undiscovered—the fear harbored by many individuals when facing negative results.

5. Never Alter the Standards
When the results are less than desired, it is tempting to lower the standards—to change the assumptions about collecting, processing, analyzing and reporting the data. This is not a time to change the standards. Changing the standards to make the data more positive renders the study virtually worthless. Without standards, there is no credibility.

6. Remain Objective Throughout
Ideally, the evaluator should be completely objective or independent of the program. This objectivity provides an arms-length evaluation of its success. It is important not only to enter the project from an objective standpoint, but also to remain objective throughout the process. Never become an advocate for or against it. This helps alleviate the concern that the results may be biased.

7. Prepare the Team for the Bad News
As red flags pop up and expectations are lowered, it appears that a less-than-desired outcome will be realized. It is best to prepare the team for this bad news early in the process. Part of the preparation is to make sure that they don't reveal or discuss the outcome of the program with others. Even when early results are positive, it is best to keep the data confidential until all are collected. Also, when it appears that the results are going to be negative, an early meeting will help develop a strategy to deal with the outcome. This preparation may address how the data will be communicated, the actions needed to improve the program and, of course, explanations as to what caused the lack of success.

8. Consider Different Scenarios
Standards connected with the ROI methodology are conservative for a reason: The conservative approach adds credibility. Consequently, there is a buy-in of the data and the results. However, sometimes it may be helpful to examine what the result might be if the conservative standards were not used. Other scenarios may actually show positive results. In this case, the standards are not changed, but the presentation shows how different the data would be if other assumptions were made. This approach allows the audience to see how conservative the standards are. For example, on the cost side, including all costs sometimes drives the project to a negative ROI. If other assumptions could be made about the costs, the value could be changed and a different ROI calculation might be made. On the benefit side, lack of data from a particular group sometimes drives a study into negative territory because of the "no data, no improvement" standard. However, another assumption could be made about the missing data to calculate an alternative ROI. It is important for these other scenarios to be offered to educate the audience about the value of what is obtained and to underscore the conservative approach. It should be clear that the standards are not changed and that the comparisons with other studies would be based on the standards in the original calculation.

9. Find Out What Went Wrong
With disappointing results, the first question usually asked is, "What went wrong?" It is important to uncover the reasons for the lack of success. As the process unfolds, there is often an abundance of data to indicate what went wrong. The follow-up evaluation will contain specific questions about impediments and inhibitors. In addition, asking for suggestions for improvements often underscores how things could be changed to make a difference. Even when collecting enablers and enhancers, there may be clues as to what could be changed to make it much better. In most situations, there is little doubt as to what went wrong and what can be changed. In worst-case scenarios, if the program cannot be modified or enhanced to add value, it may mean that it should be discontinued.

10. Adjust the Story Line
When communicating data, negative results indicate that the story line needs to change. Instead of saying, "Let's celebrate—we've got great results for this program," the story reads, "Now we have data that show how to make this program more successful." The audience must understand that the lack of success may have existed previously, but no data were available to know what needed to be changed. Now, the data exist. In an odd sort of way, this becomes a positive spin on less-than-positive data.
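The scenario comparison in step 8 ("Consider Different Scenarios") can be illustrated with a small calculation. The figures below are hypothetical; they simply show how the same program can come out negative under conservative standards and positive under an alternative assumption, presented side by side without changing the standards themselves:

```python
# Hypothetical illustration of step 8: the conservative standards stay in
# place, but an alternative scenario is shown alongside them.

def roi_percent(benefits: float, costs: float) -> float:
    """ROI = net program benefits divided by fully loaded costs, as a percentage."""
    return (benefits - costs) / costs * 100

costs = 100_000.0  # fully loaded program costs (hypothetical)

# Conservative standard: missing data counted as zero improvement
# ("no data, no improvement"), all costs included.
conservative_benefits = 90_000.0

# Alternative assumption: estimate benefits for the group that returned no data.
alternative_benefits = 120_000.0

print(f"Conservative ROI: {roi_percent(conservative_benefits, costs):.1f}%")  # -10.0%
print(f"Alternative ROI:  {roi_percent(alternative_benefits, costs):.1f}%")   # 20.0%
```

Reporting both numbers, with the conservative one as the official result, lets the audience see how cautious the standards are without the evaluator ever altering them.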

11. Drive Improvement
Evaluation data are virtually useless unless used to improve processes. In a negative study, there are usually many items that could be changed to make it more successful. It is important that a commitment is secured to make needed adjustments so that the program will be successful in the future. Until those actions are approved and implemented, the work is not complete. In worst-case scenarios, if the program cannot be changed to add value, it should be terminated and the important lessons should be communicated to others. This last step underscores that the comprehensive evaluation is used for process improvement and not for performance evaluation of the staff.

Negative study results do not have to be bad news. Negative results contain data that can be used not only to explain what happened, but also to adapt and improve in the future. It is important to consider the potential of a negative study and adjust expectations and strategies throughout the process to keep the negative results from being a surprise. In the worst-case situation, negative data will surprise the key sponsor at the time of presentation.

InVETMAGAZINE 21

VET Is Not About Content


Too many trainers still stand behind a podium, relying on content to drive learning. It's time for trainers and instructional designers working in the vocational education sector to realise that content is not what drives learning in VET. We don't teach content; we teach people. We teach people to achieve outcomes, to perform a job to industry standards.

Information overload. Students can watch speakers, read information sheets and research content at any time. Students need the interaction, the engagement and the experience. The internet provides access to infographics, case studies, blogs, podcasts, videos and tweets about almost anything. As VET practitioners we can make good use of them, but these publicly available resources will not make a relevant learning experience for our learners. We live in the Information Age, and there's too much of it! For example, according to some estimates published by the Association for Talent Development (ATD), there are more than 120,000 books and texts on leadership development, with 3,000 more published each year. We don't have a content problem; we have a filter problem. We must filter that content through the context of whom we're trying to connect with and teach. Content is what we're pouring into people. Context is everything that makes those people unique: why they're doing the training, the conditions where they will apply their learning, the expectations of their clients and workplace, their age, interests, attention span, engagement level and beliefs.

People learn in the silence. We learn in the pauses, in reflection and meditation. Don't you have your best ideas when meditating, in the shower, while driving, or when falling asleep? We learn in the spaces in between life. We can't deliver lectures to learners anymore; that's not how people learn. Content is only one part of the equation. VET programs should always be built around the learn-say-do-reflect model. It's about providing an experience. We can't teach someone to ride a bike, drive a car or use new technology without putting them on the bike, behind the wheel, or with the device in their hands.

Attention-span deficit. We live in the digital era, where our minds switch on and off every 5 to 20 minutes. The average song is three to four minutes long. The average watch time of a YouTube video is three to five minutes. A movie scene runs between a quick moment and no more than 15 minutes before switching to a new scene. It takes no more than 15 to 20 minutes to read an article in any paper. TED Talks are 18 minutes. News stories last no more than a few minutes, unless they are documentaries. We can't lecture or speak to learners (of any age) for more than 15 to 20 minutes at a time. Their attention will be gone after that. People start wondering what's next. They check their smartphones. They look at the clock.

Students need more space. Spaced learning is about engagement, conversations and one-to-one interaction. It's about exercises, simulations, demonstrations, and students teaching students. Spaced learning is about reflection: giving participants time during the session to turn their insights into actions. After a training course, people go back to their lives, their desks, their email and texts, or the next most important thing on the list, but not back to reflect. VET programs must provide the framework to support students' learning post-training. We need to provide action plans and explain exactly what they need to do immediately to get to the next level, and how to progress. In other words, follow through on the promises made with the learning objectives.

VET is not about content because we don't teach content; we teach people. We facilitate learning experiences. That's what we do.

Some factors to consider


The Learning Styles Myth
Ancient peoples invented colorful stories about powerful gods expressing themselves through the physical world. They were trying to make sense of their world using the only tools available at the time. Similarly, educators have developed many theories about the learning process over the years. Now that science has developed tools to test these theories, however, some of them are proving to be false. Learning styles is one of those former "truths" that many in the field now classify as "myth."

Here is the myth in a nutshell: people can be classified based on their learning style, and once we have this information, training will be more effective if delivered in a manner consistent with each learner's preferred style. Taken to the extreme, this would require the designer to present the same material in multiple ways: video for visual learners, audio for auditory learners, and so on. Once this idea took hold, a whole cottage industry evolved to provide teachers and trainers with tools to help them identify and apply learning styles in their work. It's big money, but shoddy science. Let's take a closer look.

Science Examines Learning Styles

This is how science works: we develop a hypothesis based on what we know, and then we go out and test it. The results either validate the hypothesis, disprove it, or suggest that further research is needed. Held up to this standard, learning styles falls apart. A recent study published in the Journal of Educational Psychology found "no statistically significant relationship between learning style preference ... and learning aptitude." Multiple studies have reached similar conclusions. As Daniel Willingham states, there is no credible evidence that learning preferences have any impact on the effectiveness of learning. Let's be clear, though, about what these studies are saying. It may well be true that each of us has a preference for how we receive information. What has been discredited is the belief that training must be aligned with that preference to ensure maximum effectiveness. Even so, many educators are still applying the learning styles concept to customise learning to the individual, and we have taught a whole generation that learning should be presented to them according to their preference.

Taking Action
If you are paying for a learning styles testing instrument, there is good news: you can stop paying for a useless tool and put your money elsewhere. If you are building courses in multiple modalities to satisfy different learning styles, you can save your energy and develop the most effective training for the content. If you are conducting train-the-trainer or teacher education programs, you can delete the entire learning styles chapter and spend more time on the practical applications of neuroscience to learning. And if you encounter a colleague who is not yet aware that learning styles has been largely disproven, you can help them update their understanding.
Reference: Journal of Educational Psychology, 2015, Vol. 107, No. 1, 64–78



1 Question You Should Never Ask During Industry Consultation


I recently met with a client who was preparing an industry consultation questionnaire for a new training product. Here is the crux of what he was planning to ask: "What do learners need to learn?" This is the one question you should never ask employers (senior leaders). You are asking a tactical question of people who function at a strategic level. Simply put, they don't know the answer. Senior leaders live in the world of results, so ask them about results. You can then drill down into what employees need to do (and learn) with the tactical managers in the organisation (supervisors).

Here is my interview guide for assessing "training" needs with senior leaders. Start by asking: What are the

company’s goals for the coming year? Goals include both problems to solve and opportunities to exploit. For example, a problem might be to improve customer service scores and an opportunity might be to launch a new product.

Then, for each goal, you can dig deeper with the following questions. Could you describe this goal in detail? This discussion helps ensure that you are completely clear on what senior leaders have in mind. What operational results will indicate you've successfully achieved the goal? Knowing what senior leaders are aiming for lets you make sure the training product you design supports those results. What do you plan to do to achieve this goal? Who will provide support to achieve this goal? What tasks need to be completed? Listen for answers that indicate employees might have to learn to do something new or different: implementing a new system, changing job responsibilities, purchasing new equipment, launching a new product or service, entering a new market, or changing work processes. At the conclusion of this conversation you won't have all the information you need to fully develop the training product, but you will have a good idea of the job outcome and the tasks associated with it: critical information for selecting elective units of competency. We define this stage as a preliminary consultation that also tells you where you'll need to follow up,

and how to establish continuous engagement across the whole life cycle of the new training product. In other words, who are the stakeholders who will be "touched" by this new training product? During your industry engagement activities, you'll need to interview tactical managers and supervisors to gather information about the workplace context (procedures, equipment, conditions, etc.). They are in the best position to identify what employees will need to learn to implement the measures senior leaders are planning to take. It is only after you talk to these supervisors that you'll have the answer to the question my client posed: "What do learners need to learn?" More importantly, the answer will be tied directly to the industry's strategic objectives.





InVET Magazine No 2  

Jan 4, 2018
