


Exploring the Cognitive Elements of Think-Aloud Protocols

Dr. Stephen Doherty
School of Applied Language and Intercultural Studies / Centre for Next Generation Localisation, Dublin City University

Abstract

Think-Aloud Protocols (TAPs) have been adopted as a data elicitation method in cognitive psychology, usability/human-computer interaction, and, of particular interest here, psycholinguistic and translation process research. This paper compares applications of the method across these domains. Drawing from the literature, the research method itself is reviewed in terms of its perceived strengths and weaknesses. There follows a discussion of the combination of TAPs with eye tracking, a research design that attempts to overcome the shortcomings of standalone TAP research designs. It is argued that despite several significant disadvantages, the TAP method remains a useful tool when used in conjunction with other methods. The paper finishes with recommendations and considerations for adopting TAP designs.

Keywords: think-aloud protocol, eye tracking, introspection, meta-cognition, research methods, cognition



1. Introduction

This paper investigates the use of think-aloud protocols (henceforth TAPs) as a research method and their use across several disciplines. It provides descriptions of the various forms of TAPs and reviews their use in areas from psychology to usability/human-computer interaction, and psycholinguistics/translation process research. This review is followed by a summary of the strengths and weaknesses of the TAP method, and the advantages of combining TAPs with complementary data elicitation techniques are discussed. Finally, conclusions are drawn in the form of recommendations and considerations for the use of TAPs as a research method.

2. Think-Aloud Protocols

TAPs are a data elicitation technique in which a participant is asked to verbalise his/her own thoughts during or after performing a given task; these verbalisations are recorded as a think-aloud protocol or TAP. TAPs were adopted from psychology, where they were developed primarily by Ericsson and Simon (1993). The method has since been used in many areas of psychology, e.g. cognitive processing, usability/human-computer interaction, and more recently in translation process research. In early studies in translation process research, for example, TAPs were a common type of data elicitation technique, but they have since given way to technologically more advanced methods such as keystroke logging (Jakobsen 2006), eye-tracking (O’Brien 2006), and brain imaging (Gerganov et al. 2008), all of which have the potential to be used in conjunction with the TAP method (see, for example, Doherty and O’Brien 2009).


Hannu and Pallab (2000) divide TAPs into two types: concurrent and retrospective. Concurrent TAPs are protocols where participants verbalise thoughts during the process under investigation, while retrospective TAPs take place after the process. Similarly, Dumas and Redish (1993) distinguish between verbalisations that articulate pure thoughts and ‘thinking plus explanations’, often used in retrospective methods, where participants provide the reasoning behind their thinking and actions. Such ‘thinking about thinking’ is known as ‘meta-cognition’ (Metcalfe and Shimamura 1994). In a comparison of the two types, Hannu and Pallab (2000) find that concurrent verbalisation provides more insight into the steps leading to a decision, while a retrospective approach provides more detail on the decision itself. On the other hand, Taylor and Dionne (2000) argue that concurrent TAPs have a detrimental impact on the validity of results, and favour retrospective methods (discussed in the following section).

Simply put: participants verbalise whatever they are looking at, thinking, doing, and feeling as they go about their task. This enables the researcher to observe first-hand the processing of the task as it is being completed, rather than its final product, as would be the case for product-oriented research methods. In TAP designs where no other method of data recording is used, the researcher takes notes, as objectively as possible, of all actions and verbalisations made by the participants during the task. It is important for the researcher to do so without attempting to interpret the participant’s actions and words at the time of data capture. Task sessions are often audio or video recorded, so that researchers can go back and refer to what participants said, did, and how they reacted, etc. The purpose of this method is to make explicit what is implicitly present when participants perform a specific task, without interference or interpretation from the researcher.

TAP methods are further distinguished by the type of moderation adopted by the researcher during the TAP, which can be active or inactive. Active moderation focuses on providing each participant with individual treatment where the researcher asks questions on particular points of interest that arise during the experiment. In other words, the researcher can act as the cue for the verbalisation, and guide the participant in particular directions. Inactive moderation typically focuses on greater consistency and experimental control where there is little or no interaction with the participant, and each participant receives the same experimental treatment. The researcher may, for example, interact with the participant only if verbalisations have stopped, while another researcher may choose to record such pauses and hesitations as part of the data. Validity and consistency are of concern in designs that contain more active moderation.

TAPs and introspective methods of data elicitation appear frequently in psychology research in studies of, for example, problem solving and social attitudes. Findings from such studies highlight the possibility of data being confabulated (Nisbett and Wilson 1977) by participants, for example, in terms of decision-making processes (Johansson et al. 2006) and judgements made about others (Nisbett and Wilson 1977). Indeed, such findings call into question the interpretation of data accessed through introspection, and even whether we possess the ability to directly and fully access our own cognitive processes at all.



Pronin (2007) postulates the ‘introspective illusion’, which states that we give much greater weighting to conclusions we draw from our own introspection, and far less to our assessment of the cognitions of others. Furthermore, we typically ignore our own behaviour in our self-assessment, but rely on it the most when assessing others. This stands to reason, as we cannot introspect the minds of others. Moreover, Pronin (ibid.) states that we believe our own introspections to be reliable and valid even when this is not the case. Related to this is the consistent finding that we see ourselves as less biased than others, in that we are unlikely to introspect our own biased thoughts, and we simply may not be aware of the extent to which such biases affect our perception and cognition (Pronin and Kugler 2007).

Kahneman and Tversky (1972) explore the nature of cognitive biases and provide consistent and replicable evidence of humans making decisions and judgements that differ, sometimes greatly, from rational choice. Examples include consistency bias (recalling past attitudes and behaviours as being similar to present ones) and in-group/out-group bias (in-groups are seen as more diverse and interesting, while out-groups are seen as homogeneous and are defined by arbitrary categories and stereotypes). In addition, there have been concerns surrounding the validity and reliability of introspection of cognitive processes in everyday events and unusual circumstances (White 1988). This may prove damaging to research set-ups of low ecological validity, where participants find themselves in strange and unfamiliar situations or using tools and equipment they are unused to, e.g. a head-mounted eye tracker.



Of further concern is the finding that even when introspections are uninformative for the purposes of the task at hand, participants still give confident descriptions of their mental processes; in other words, they are not aware of their own unawareness (Wilson and Bar-Anan 2008). This aspect alone represents a serious threat to research designs in which the participant may have nothing to say about an aspect of the study and simply ‘makes up’ something to end the session, or to fulfil what they perceive to be the objectives of the research. This can happen if the researcher is known to the participant or has an influence over them, as might happen in a student-teacher relationship. The latter presents an issue of research ethics, which can be treated inconsistently across institutions and domains (Angell et al. 2007), and may also not be explicitly reported in dissemination of the research.

In translation process research, Krings was one of the early adopters of the TAP method (e.g. Krings 1986). Detailed accounts of the use of TAPs in translation process research can be found elsewhere and are beyond the scope of the current paper (see Jääskeläinen 2002, Tirkkonen-Condit 2002). Jakobsen (2003: 78-79), for example, found that the use of concurrent TAPs significantly increased the time it took to produce a translation. He observed notable differences between professional and student translators, whereby the former made far fewer verbalisations, which suggests that the process of translation, or at least sub-processes thereof, has become automated in professionals and is therefore not available for introspection and consequently for verbalisation.



3. TAPs as a Method of Data Elicitation

This section presents a summary of the strengths and weaknesses of TAPs in a variety of research from the domains touched on in the previous section. The section is divided into three parts: strengths, weaknesses, and additional factors.

Strengths

1) TAPs can be said to be a valid measure of human experience, which is, of course, subjective. If the intent of the research is to capture existentialist reflections, the TAP method provides a unique means of doing so. For areas such as usability testing, such rich data can be seen very positively and can help improve user interfaces. For many scholars, the ability to capture such rich qualitative data is a very attractive aspect of TAP methods.

2) As a research method, TAPs can be extremely resource-cheap and portable. Basic instruments may consist of the necessary materials for the task in question, and a means of recording or capturing the data for later analysis, e.g. a tape recorder.

3) For participants, a TAP study can provide interesting and practical findings about their own behaviour, and could be used to assist them in self-development, enabling them, for example, to complete a process more efficiently, or to learn how to perform a task for the first time by viewing recordings of experts performing the task and listening to/reading their verbalisations about important aspects. Such self-reflection may be of great value to certain groups of participants (Bartels 2008), such as student translators or learners of skilled tasks.

Weaknesses

1) Some scholars argue that the reductionist and behaviourist approaches implicit in the TAP method do not offer valid measures of human experience, since experience simply cannot be reduced to verbalisations and/or overt behaviour (Nielsen et al. 2002). Similarly, it has been argued that during task processing participant behaviour does not necessarily equate to or adequately reflect higher cognitive processing (Whiteside et al. 1993).

2) As demonstrated by several studies (e.g. Fleck and Weisberg 2004, Guan et al. 2006), the content verbalised by participants may be subject to fabrication or forgetting; the latter is especially problematic in retrospective designs. From their investigations, Russo et al. conclude that “retrospective protocols yielded substantial forgetting or fabrication in all tasks”, thus “supporting the consensus on the nonveridicality of these methods” (1989: 759). This forgetting can be attributed to the retrospective method tapping into the long-term memory store, which can be dangerously erroneous (Ball et al. 2006).

3) There are additional incompatibilities between TAPs and widely validated models and concepts of cognitive processing. One incompatibility is due to the limitations of short-term memory, which stores the content that is verbalised. If indeed the content is drawn from this memory store, it cannot be accurate because, although the contents of more than one process can be held in short-term memory, the verbalisation in concurrent TAPs cannot run independently of the task being reported on, thus resulting in disruption, manipulation (Eysenck and Keane 2010) or restructuring (Fleck and Weisberg 2004).

4) Similarly, implicit knowledge is of concern as it may not be available for verbalisation; however, such implicit knowledge may be central to the performance of a task. The argued implicit-explicit nature of knowledge representation is akin to the concept of conscious-unconscious cognitive processing. It presents an issue with regard to the lack of availability of cognitive and meta-cognitive information to the participant during the TAP (Eger et al. 2007). Broadbent et al. (1986) state that implicit knowledge is often non-verbal and can therefore be difficult, if it is at all possible, to articulate.

5) Related to the previous point, the issue of automaticity of tasks is of further concern and may explain why information is not available for verbalisation. In Jakobsen’s (2003) experiment, differences between student translators and professionals were evident: because the latter group had much more experience with the task, their processing had become automated and was therefore not available for verbalisation. Similarly, the inexperienced student group verbalised more and processed smaller ‘chunks’ at a time, i.e. they were not accustomed to the task of translation to the same extent as the professionals. Lastly, as cognitive processing is much faster than verbal processing, and the latter is a sub-process of the overall higher cognitive framework (Jakobsen 2003), additional confounding factors arise.



6) Correspondingly, concurrent TAPs in which there is intervention on the part of the researcher represent a disruption to task processing, especially for participants who are already using significant cognitive resources to perform the task (Preece et al. 1994). The additional burden of verbalisation during task processing may be too much for certain tasks and result in possibly unknown effects on the nature and content of the verbalisations, and indeed on the task processing itself (Gile 1998). For example, translation and concurrent verbalisation may reduce the number of translation units or ‘chunks’, in terms of words, phrases, etc., that the participant can process, as observed with student translators (Jakobsen 2003).

7) The design of the TAP may also have an effect on the content and nature of the verbalisations. Bartels (2008) argues that concurrent designs bias the first impressions of the task made by the participants, while retrospective designs run the risk of forgetting such first impressions.

8) Finally, and not unique to the TAP method, data captured may be situation-dependent and especially prone to environmental factors: a participant sitting in a research lab, for example, may feel uneasy and consequently be unwilling to verbalise, or may curtail their verbalisations as a result.

Additional Factors

1) The subjective nature of verbalisations allows for the presence of considerable individual differences, which, depending on the nature and aims of the research, can be positive or negative. It has been demonstrated that participants verbalise in similar ways across tasks; for instance, a participant who produced a long verbalisation on one task did so for others (Gilhooly 1987). However, such traits were found not to correlate with personality traits or measures of verbal fluency (ibid.).

2) Moderation, as discussed several times above, can lend itself to fruitful results depending on the aims of the study. On the other hand, it can prove damaging to validity. In active moderation the researcher may, during the study, focus on areas of interest related to the attention and behaviour of the participant to uncover specific and otherwise hidden information. In such cases, researchers should have preset neutral questions to keep bias to a minimum. In inactive moderation, the researcher may, for example, prompt only when the participant is silent for a set period of time. However, perhaps such silence is a finding in itself, and prompting may make the participant verbalise something for the sake of doing so, or confabulate to proceed or finish the task. When even the slightest amount of active moderation is present, the researcher should acknowledge this and attempt to account for the effects of the increased level of interaction with the participant (Tamler 2001).

Nielsen (1993) argues that a pragmatic approach should be taken in studies adopting TAP methods, especially in usability studies, in that moderation should be kept as inactive as possible, yet a direction should be provided if necessary to address the aims of the study. As usability studies are often driven by commercial needs, issues of validity, reliability, and generalisability may not be of as much concern to the researcher, or indeed the funder, as to the academic community. In a sense, such studies are searching for what the participant finds to be of interest or concern; therefore, it could be argued that ‘the ends justify the means’ in such an implementation of a TAP method.

3) The relationship between the participant and the researcher may also present additional issues, in that the participant may think or feel that he/she should speak or think in a certain way to fulfil the wishes of the researcher. Sampling is also a fundamental aspect of this potential bias, in that factors such as whether participation is voluntary and whether participants know the researcher may affect task performance, verbalisations, and data in general.

4. Recommendations

From the review of the literature and the categorisation of findings on TAPs into strengths and weaknesses, it is argued here that the weaknesses of the TAP method, when used alone, greatly outweigh its strengths. This is especially the case in fields such as cognitive psychology and the behavioural sciences, where TAPs are not typically seen as a valid or reliable method of data elicitation unless they are strictly operationalised in an ‘objective’ and clearly measurable way.

Rosenthal (2000) concludes that no form of introspection accurately represents the mental states of interest, nor does it provide valid insight into concurrent states in any situation. It therefore requires additional supplementary methods to be valid and reliable. This draws from Lashley (1958), who states that introspection only makes the results of mental processes accessible and explicit, while the processing itself remains implicit and inaccessible.


However, the potential strength of TAPs for uncovering unexpected phenomena should not be overlooked, especially in research concerning humans and/or relating to human experiences, e.g. in usability and evaluation studies. More objective methods, such as brain imaging, have validated the use of retrospective TAPs: Klasen et al. (2008), for example, support the use of TAPs in conjunction with functional magnetic resonance imaging (fMRI) in video games research. Such a need to supplement TAPs with more objective measures is widely found across domains in the literature (e.g. Kaakinen and Hyönä 2005).

Some researchers have thus sought to overcome the shortcomings of TAPs by using a mixed-method design, for instance by triangulating the (qualitative) data elicited via TAPs with data elicited in other ways, especially via quantitative methods such as eye tracking. While it may be apparent that there is no perfect method for research involving human experiences and opinions, such a combination of methods allows researchers to compensate somewhat for each individual method’s shortcomings while retaining its unique richness.

An important exercise in adopting the TAP method, or indeed when using it in conjunction with other methods, is to identify the limitations and possible confounding factors resulting from the research design. For example, the coding of verbalisations (e.g. Doherty and O’Brien 2009) arguably fulfils the requirements of a qualitative approach, yet when data are analysed quantitatively and especially with other (more) objective methods, it may pose questions as to the reliability and validity of findings due to the arbitrary and subjective nature of the decisions made to code and categorise verbalisations and/or themes.
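To make the coding-reliability concern more concrete, the short sketch below illustrates one common way of quantifying agreement between two coders who have independently categorised the same verbalisation segments, using Cohen's kappa. It is offered purely as an illustration: the category labels and codes are hypothetical, and any real coding scheme and agreement statistic would need to be justified in relation to the study at hand.

from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' categorical codes of the same segments."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(codes_a) | set(codes_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten verbalisation segments.
coder_1 = ["planning", "monitoring", "evaluation", "planning", "monitoring",
           "evaluation", "planning", "planning", "monitoring", "evaluation"]
coder_2 = ["planning", "monitoring", "planning", "planning", "monitoring",
           "evaluation", "evaluation", "planning", "monitoring", "evaluation"]

print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")

Reporting such an agreement figure alongside the coded protocols makes the subjectivity of the categorisation visible rather than leaving it implicit.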



Similarly, participant training for TAP studies remains a concern in that, for many participants, verbalising may be a new and uncomfortable experience; their utterances may therefore not truly represent the results of their introspections. On the other hand, by providing participants with sample (written or pre-recorded) verbalisations, the researcher may sway the results by ‘priming’ participants to conform to these set examples. Such issues are especially concerning in studies carried out in different languages, or where the working language is not the participant’s native tongue (e.g. Jarodzka et al. 2010).

In light of such misgivings about the validity of data derived from TAPs, scholars have used them in conjunction with other (more) objective measures of cognitive processes, supplementing them with an additional source of data derived simultaneously from the same processes and tasks.

An example of this can be seen in eye-tracking studies where retrospective TAPs are combined with a playback or gaze replay of an eye-tracking recording as a form of cued recall, allowing participants to view their eye behaviour during the task and provide retrospective commentary. This method has been shown to be a successful way to elicit rich qualitative information that is supported quantitatively (e.g. Van Gog et al. 2005, Ball et al. 2006, Eger et al. 2007, Doherty et al. 2010). Participants in such studies have commented on the usefulness of the playback, e.g. of a video recording (Guan et al. 2006), as a reminder of what they did during the task and as a way to highlight aspects of the task they were unaware they attended to or that they had missed entirely. Thus the participants in the study reported on in Doherty and O’Brien (2009) noticed errors in text they had read earlier only when the video that recorded their eye movements was played back to them.
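As a purely illustrative sketch of how data from such a cued-recall design might be brought together, the code below pairs time-stamped retrospective comments with the fixations that fall around the corresponding moment in the gaze replay. The data structures and field names are hypothetical and do not correspond to any particular eye-tracking package; the only assumption is that both fixations and comments are logged relative to the start of the replay.

from dataclasses import dataclass
from typing import List

@dataclass
class Fixation:
    start_ms: int     # onset relative to the start of the gaze replay
    duration_ms: int
    aoi: str          # area of interest, e.g. a source-text segment

@dataclass
class Comment:
    time_ms: int      # point in the replay at which the comment was made
    text: str

def fixations_near(fixations: List[Fixation], comment: Comment,
                   window_ms: int = 500) -> List[Fixation]:
    """Return fixations whose onset lies within +/- window_ms of a comment."""
    return [f for f in fixations
            if abs(f.start_ms - comment.time_ms) <= window_ms]

# Hypothetical data: a short gaze record and two retrospective comments.
fixations = [
    Fixation(0, 400, "sentence_1"),
    Fixation(450, 600, "sentence_2"),
    Fixation(1200, 900, "sentence_2"),
    Fixation(2300, 500, "sentence_3"),
]
comments = [
    Comment(1300, "I wasn't sure about this term, so I reread the sentence."),
    Comment(2500, "Here I noticed the error I had missed the first time."),
]

# Pair each retrospective comment with the region(s) of text it appears to cue.
for c in comments:
    aois = sorted({f.aoi for f in fixations_near(fixations, c)})
    print(f"{c.time_ms} ms: {c.text} -> {aois}")

Even this toy pairing shows the appeal of the combination: the qualitative comment is anchored to quantitative gaze evidence rather than standing alone.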

Overall, it can be concluded that explorations via TAPs offer interesting and fruitful findings. However, the works reviewed here suggest that combining TAPs with other methods, particularly quantitative ones, greatly strengthens their potential as a research method.



References

Angell, E. L., Jackson, C. J., Ashcroft, R. E., Bryman, A., Windridge, K., and Dixon-Woods, M. (2007). Is ‘inconsistency’ in research ethics committee decision-making really a problem? An empirical investigation and reflection. Clinical Ethics, 2, pp. 92-99.

Bartels, M. (2008). The objective interview: Using eye movement to capture precognitive reactions. Qualitative Research Consultants Views, Spring 2008, pp. 58-61.

Ball, L. J., Eger, N., Stevens, R. and Dodd, J. (2006). Applying the PEEP method in usability testing. Interfaces, 67, pp. 15-19.

Broadbent, D. E., Fitzgerald, P., and Broadbent, M. H. P. (1986). Implicit and explicit knowledge in the control of complex systems. British Journal of Psychology, 77, pp. 33-50.

Doherty, S. and O'Brien, S. (2009). Can MT output be evaluated through eye tracking? MT Summit XII: Proceedings of the Twelfth Machine Translation Summit, Ottawa, Ontario, Canada, pp. 214-221.

Dumas, J. S. and Redish, J. C. (1993). A Practical Guide to Usability Testing. Norwood, NJ: Ablex Publishing Corp.

Eger, N., Ball, L. J., Stevens, R. and Dodd, J. (2007). Cueing Retrospective Verbal Reports in Usability Testing Through Eye-Movement Replay. Proceedings of HCI.

Ericsson, K. A. and Simon, H.A. (1993). Protocol analysis: Verbal reports as data (Rev. ed). Cambridge, MA: MIT Press.

Eysenck, M. W. and Keane, M. T. (2010). Cognitive Psychology: A Student's Handbook (6th ed.). East Sussex and New York: Psychology Press.

Fleck, J. I. and Weisberg, R. W. (2004). The use of verbal protocols as data: An analysis of insight in the candle problem. Memory and Cognition, 32 (6), pp. 990-1006.

Gerganov, A., Kaiser, V., Braunstein, V., Popivanov, I., Brunner, C., Neuper, C. and Stamenov, M. (2008). Priming bilingual brain with correct and incongruent translations of true and false cognates in English-German during a translation task: An EEG and eye tracking study. Ghent Workshop on Bilingualism, Ghent, Belgium.

Gile, D. (1998). Observational studies and experimental studies in the investigation of conference interpreting. Target, 10 (1), pp. 69-93.

Gilhooly, K. J. (1987). Individual differences in thinking-aloud performance. Current Psychology, 5, pp. 328-334.

Guan, Z., Lee, S., Cuddihy, E. and Ramey, J. (2006). The Validity of the Stimulated Retrospective Think-Aloud Method as Measured by Eye-Tracking. CHI, Montreal, Canada.

Hannu, K., and Pallab, P. (2000). A comparison of concurrent and retrospective verbal protocol analysis. American Journal of Psychology, 113 (3), pp. 387–404.

Jääskeläinen, R. (2002). Think-aloud protocol studies into translation: an annotated bibliography. Target 14, 1, pp. 107-136.

Jakobsen, A. L. (2003). Effects of think aloud on translation speed, revision and segmentation. In: Alves, F. (ed.). Triangulating Translation. Perspectives in Process Oriented Research. Amsterdam: John Benjamins, pp. 69-95.

Jakobsen, A. L. (2006). Research methods in translation: Translog. In: Sullivan, K. P. H. and Lindgren, E. (eds.). Computer Keystroke Logging and Writing. Amsterdam: Elsevier, pp. 95-105.

Jarodzka, H., Scheiter, K., Gerjets, P., and van Gog, T. (2010). In the eyes of the beholder: How experts and novices interpret dynamic stimuli. Learning and Instruction, 20, pp. 146-154.

Johansson, P., Hall, L., Sikström, S., Tärning, B. and Lind, A. (2006). How something can be said about telling more than we can know: On choice blindness and introspection. Consciousness and Cognition, 15 (4), pp. 673-692.



Kahneman, D., and Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3, pp. 430-454.

Kaakinen, J. K. and Hyönä, J. (2005). Perspective effects on expository text comprehension: Evidence from think-aloud protocols, eye tracking and recall. Discourse Processes, 40, pp. 239-257.

Klasen, M., Zvyagintsev, M., Weber, R., Mathiak, K. A., and Mathiak, K. (2008). Think Aloud during fMRI: Neuronal Correlates of Subjective Experience in Video Games. Fun and Games, Lecture Notes in Computer Science, 5294, pp. 132-138.

Krings, H. P. (1986). Was in den Köpfen von Übersetzern vorgeht, Tübingen: Gunter Narr.

Lashley, K. S. (1958). Cerebral organization and behavior. In: The brain and human behavior, proceedings of the association for research on nervous and mental disease. Baltimore: Williams and Wilkins, no page numbers.

Metcalfe, J., and Shimamura, A. P. (1994). Metacognition: knowing about knowing. Cambridge, MA: MIT Press.

Nielsen, J. (1993). Usability Engineering. Chestnut Hill, MA: Academic Press, Inc.

Nielsen, J., Clemmensen, T., and Yssing, C. (2002). Getting access to what goes on in people’s heads? – Reflections on the think-aloud technique. In: Proceedings of the second Nordic conference on Human-computer Interaction, NordiCHI, Aarhus, Denmark, October 19-23, pp. 101-110.

Nisbett, R. E. and Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, pp. 231-259.

O'Brien, S. (2006). Eye Tracking and Translation Memory Matches. Perspectives: Studies in Translatology, 14 (3), pp. 185-205.

Pronin, E. (2007). Perception and misperception of bias in human judgment. Trends in Cognitive Sciences 11 (1), pp. 37–43.

Pronin, E. and Kugler, M. B. (2007). Valuing thoughts, ignoring behaviour: The introspection illusion as a source of the bias blind spot. Journal of Experimental Social Psychology 43, pp. 565–578.

Russo, J. E., Johnson, E. J. and Stephens, D. L. (1989). The validity of verbal protocols. Memory & Cognition, 17, pp. 759-769.

Tamler, H. (2001). How (Much) to Intervene in a Usability Testing Session. In: Design by People For People: Essays on Usability. New York: UPA, pp 165-171.

Tirkkonen-Condit, S. (2002). Process research: State of the art and where to go next? Across Languages and Cultures, 3 (1), pp. 5-19.



White, P. A. (1988). Knowing more about what we can tell: Introspective access and causal report accuracy 10 years later. British Journal of Psychology, 79 (1), pp. 13-45.

Whiteside, J., Bennett, J.L., and Holtzblatt, K. (1993). Usability Engineering: Our Experience and Evolution. In: Handbook of Human Computer Interaction. Helander, M. (ed.). New York, NY: Elsevier Science Publishers.

Wilson, T. D. and Bar-Anan, Y. (2008). The Unseen Mind. Science, 321 (5892), pp. 1046–1047.

Van Gog, T., Paas, F., van Merrienboer, J. J. G. and Witte, P. (2005). Uncovering the Problem-Solving Process: Cued Retrospective Reporting Versus Concurrent and Retrospective Reporting. Journal of Experimental Psychology: Applied, 11 (4), pp. 237-244.

