Understanding Expert and Novice Meaning-Making from Global Data
Visualizations
Comprendre comment les experts et les novices donnent sens à des
données visuelles globales
Kathryn Stofer, Graduate Research Assistant, Oregon State University, USA
Summary
Scientists often create visualizations with cultural conventions such that novices, who
lack the extensive training of professionals, cannot make meaning from them in the same way as
experts. This research addresses the question of how scientists and novices analyze global data
visualizations and how they use scaffolding, that is, supporting details or labels added to the
images to clarify the meaning of the data. The project uses multiple methodologies from
education and neuroscience to address questions of how people make meaning.
Previous work shows that changing culturally "scientific" color scales and measurement units
to more broadly culturally relevant colors and units improves comprehension of the overall
scientific meaning for both teachers and science center visitors. Adding geographic labels,
borders, and legends based on the ways users naturally read or scan pages makes the areas
represented in an image more immediately recognizable. Perceptually, color scales built to work
with, rather than against, the human visual system could also facilitate meaning-making.
Clinical interviews with subjects elicited areas of confusion and differences in meaning-making
between experts (n=12) and novices (n=18) that subjects can articulate. Experts suggest
that much of their knowledge relating to the imagery was learned no earlier than graduate school.
Eye-tracking indicates differences in non-conscious attention to features in visualizations. Pilot
studies reveal that experts use the color bar more readily and that novices and experts have
qualitatively different patterns of viewing, with novices concentrating more on areas around North America.
Especially in out-of-school settings, the less time and energy users expend comprehending the
basic features of what is depicted and where it is, the more effort they can spend recognizing and
evaluating patterns in the data.
Keywords
Data visualization, eye-tracking, expert/novice, museums, education, free-choice learning
Visualisation de données, oculométrie, expert/novice, musées, éducation, apprentissage de libre choix
Résumé
Les scientifiques utilisent souvent pour leurs visualisations des conventions culturelles
qui sont peu perméables aux novices manquant l'entrainement poussé des professionnels. Leur
compréhension sera alors différente de celle des experts. Cette étude pose la question de la
manière dont les scientifiques et les novices analysent les visualisations de données globales et
comment ils utilisent un échafaudage, c'est à dire des détails ou labels ajoutés à l'image pour
clarifier la signification des données. Le projet utilise de multiples méthodologies provenant de
l'éducation et des neurosciences pour poser la question de l'élaboration du sens et de la
compréhension par les sujets.
Des travaux précédents ont montré que le remplacement d'échelles de couleur et unités de
mesure issues de la culture scientifique par des couleurs et unités plus largement répandues dans
la culture populaire augmente la compréhension des données scientifiques par les professeurs et
les visiteurs des centres scientifiques. Ajouter des labels géographiques, des frontières et des
légendes basées sur la façon dont les utilisateurs les lisent ou analysent naturellement rend les
zones représentées sur une image plus rapidement reconnaissables. Des échelles de couleur créées
pour travailler avec, plutôt que contre, le système visuel humain facilitent également la
perception et la compréhension.
Des entretiens cliniques avec les sujets ont permis de distinguer des zones de confusion et
des différences exprimées entre les experts (n=12) et les novices (n=18) dans l'élaboration du
sens. Les experts suggèrent qu'une grande partie de leurs connaissances relatives à l'imagerie n'a
pas été acquise avant le niveau Master à l'université. L'oculométrie indique des différences dans
l'attention inconsciente aux détails des visualisations. Des études pilotes révèlent que les experts
utilisent la barre de couleurs plus facilement, et que les novices et experts ont des schémas de
visualisation qualitativement différents, les novices se concentrant plus sur les zones proches de
l'Amérique du Nord. En particulier en dehors du contexte scolaire, plus le temps et l'énergie
dépensés par l'utilisateur pour comprendre les caractéristiques de base de ce qui est représenté et
sa localisation sont réduits, plus celui-ci pourra concentrer son effort sur la reconnaissance et
l'évaluation de l'organisation des données.
Accessing scientific meaning in visualizations of ocean data often requires specialized
training. This leaves even so-called “educated” adults rather in the dark when trying to make
scientific meaning from these images that accompany popular media stories. The constructivist
position makes explicit the nature of science as provisional rather than absolute[1] and makes the
divide between trained, enculturated experts and untrained, external novices more than just a
matter of a lack of knowledge. It is instead a lack of the enculturation that would enable one to
contextualize and make scientific (cultural) meaning of the knowledge that science develops.
For example, one type of visualization shows data as gradations of color on a map,
revealing patterns to the eye that matrices of numbers obscure. The visualizations illuminate
Earth's processes to scientists and are intended to communicate research findings. Scientists can
use the visualizations to judge the scientific value of the data, grasp the current state of
understanding, and apply new knowledge in support of laws, politicians, and businesses.
However, many non-scientists cannot make sense of unscaffolded visualizations. Therefore,
understanding how users acquire information from the images and how image-makers facilitate
optimal communication is of fundamental importance to both science and science education.
In this research, we examine and compare how scientists with this training and untrained
novices bring various skills and knowledge to bear to make meaning from these visualizations.
We test several types of scaffolding supports in the images to determine whether they bring
meaning-making by the public closer to that of scientists.
Expert scientists often create visualizations with such complexity that novices, who are
unfamiliar with cultural conventions of trained professionals, do not make the same meaning or
extract the same information from the images as the experts do. Two features of images used in
scientific journals can be altered to enable novices to make meaning from the scientific findings
expressed in the images[2]. Changing unfamiliar “scientific” color scales, such as temperatures
depicted with a rainbow spectrum, to more culturally relevant reds and blues, vastly improved
comprehension of the overall meaning for both teachers and science center visitors. Furthermore,
adding geographic labels, borders, and legends based on the ways users naturally read or scan
pages makes the areas represented in an image more immediately recognizable. The less time and
energy users expend comprehending the basic features of what is depicted and where it is, the
more effort they can spend recognizing patterns in the data that illustrate processes and evaluating
those processes in a broader context[3].
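As an illustration of this kind of scaffolding, the short Python/matplotlib sketch below renders the same synthetic sea surface temperature field twice, once with a rainbow color scale and once with a blue-to-red scale, and adds a title and a labeled color bar. It is a minimal example using assumed data and labels, not the pipeline used to produce the study imagery.

# Minimal sketch (assumed data and labels, not the study's actual image pipeline):
# the same field shown with a rainbow scale versus a culturally familiar
# blue-to-red scale, plus a title and labeled color bar as simple scaffolds.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical monthly-averaged sea surface temperature grid (deg C)
# on a 1-degree latitude x longitude grid.
rng = np.random.default_rng(0)
sst = 15 + 10 * rng.random((180, 360))

fig, axes = plt.subplots(1, 2, figsize=(12, 4))
for ax, cmap, label in [
    (axes[0], "jet", "Rainbow ('scientific' convention)"),
    (axes[1], "RdBu_r", "Blue-to-red (culturally familiar)"),
]:
    im = ax.imshow(sst, cmap=cmap, origin="lower", extent=[-180, 180, -90, 90])
    ax.set_title(label)
    ax.set_xlabel("Longitude")
    ax.set_ylabel("Latitude")
    fig.colorbar(im, ax=ax, label="Sea surface temperature (°C)")

fig.suptitle("Monthly average sea surface temperature")  # title scaffold
plt.tight_layout()
plt.show()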
Conceptual Framework
This study draws on more than one tradition in the social sciences for understanding
meaning making, learning, and science literacy. I will be engaging in a type of bricolage[4],
bringing together what otherwise might be considered disparate or even non-compatible materials
in one artistic or cultural production. I draw components of several epistemologies into a new
combination to explain our relation to the world, a strategy suggested by a growing number of
methodologists[5], [6]. By using the same types of images across methods in these three
paradigms, I will triangulate the results and elucidate ways to allow findings from each approach
to shed light on findings from the other approaches attempting to answer the same questions.
Social Constructivism
Constructivism suggests that the individual constructs her own knowledge and reality
based on personal experiences, as opposed to discovering an objective world[7]. Prior knowledge
and experience shape what the individual considers relevant from the ongoing stream of sensory
input. Then individuals either assimilate new knowledge into their existing frameworks, or alter
those existing frameworks to accommodate both the prior and current knowledge and
experiences.
Social constructivism postulates the individual builds a mental representation of the
environment based not only on internal knowledge, but also on external affordances and
constraints. When learning science, it is not enough to put novices into situations where they will
“discover” science. They also need to be given access to experiences, concepts, and models of
science, assisted in the construction of their personal models through social mediation, and
enculturated into science through help using their scientific understandings, especially where their
constructed scientific understandings apply[1].
Socioculturalism
From a sociocultural perspective, learning occurs in the zone of proximal
development[8], that is, where the material or task to be learned is neither too close to nor too far
from the learner's current understanding. Mastery of the appropriately-challenging material is
scaffolded for the learner by a "more knowledgeable other" who guides the learner by offering
support and breaking the task to be mastered into smaller steps.
Scaffolding, that is, aiding a learner in completing a task that would otherwise be beyond
his or her current capability, involves more than simple guided assistance or modelling and
imitation[9]. Instead, scaffolding removes elements of the task that the learner could not
complete, allowing him or her to focus on what he or she can do.
In the case of visualizations, when educators cannot be present, the visualizations
themselves can contain material in such a way as to scaffold the expert information to a more
novice level. Some of the tasks that prove difficult for non-scientists are: interpreting rainbow
color scales, interpreting legends with unfamiliar scale units, and orienting geographically within
geo-visualizations[2]. This project explores how and why those scaffolds work and investigates
the various levels of scaffolding to understand which interventions or combinations prove most
useful, or which elements of the expert-level task are more difficult than others.
Cognitive Psychology and Neuroscience
Geographers and others researching and developing geo-visualizations understand that
elucidating those areas of human perception, cognition and visual processing that impact use of
visual communication tools such as maps and data-filled graphics is a key challenge in order to
move the discipline forward[10]. Color is again very relevant when viewed through this lens. For
example, the rainbow color scheme favored by many scientific visualizers represents middle
values as yellows and highs as reds, but yellow-green is perceptually brightest to the human
eye[11], so middle-range data often end up "standing out" visually when they do not stand out statistically.
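As a rough illustration of this perceptual point (a sketch of my own, not part of the study), the relative luminance of colors sampled along a rainbow scale can be approximated with the standard Rec. 709 weights; the mid-scale yellow-green samples come out brightest. The calculation below applies the weights to the stored RGB values without gamma linearization, so it is only an approximation.

# Approximate relative luminance along matplotlib's rainbow-like 'jet' colormap.
# The Rec. 709 weights are applied directly to the stored RGB values, which is
# enough to show that the mid-scale yellow/green region is the brightest.
import numpy as np
import matplotlib.cm as cm

positions = np.linspace(0.0, 1.0, 11)   # 0 = scale minimum, 1 = scale maximum
rgba = cm.jet(positions)                 # RGBA colors sampled along the scale
luminance = (0.2126 * rgba[:, 0] +
             0.7152 * rgba[:, 1] +
             0.0722 * rgba[:, 2])

for p, y in zip(positions, luminance):
    print(f"scale position {p:.1f}: approximate luminance {y:.2f}")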
Many neuroscientists have claimed to produce results that can inform educational
practice, but few have evinced practical ways of doing so[12]. Though efforts have been made to
examine higher-order tasks[13], [14], there still exists a dearth of appropriate real-world
behavioral tasks that could be tested both in the social/psychological realm and in the
neuroscience realm. In the case of map use, a real-world task involving either mental or physical
spatial rotation requires different subskills than commonly-investigated psychological tasks of
rotation of abstract geometric shapes[15]. This study tests the same real-world questions in
multiple realms.
Methods
Non-color-blind adult oceanography experts (PhD holders with five years of experience,
n=12) are compared with adult novices (18 years old with less than two years of college
coursework completed, non-science majors, n=18) while making meaning from global ocean data
visualizations in two experiments: clinical interviews[16], [17] and eye tracking. A sub-sample
of high and low performers (n=5 each from the novice population) and 5 experts chosen at random
from the clinical interviews will participate in the eye-tracking experiment. Participants are
paid for participation in each phase of the study.
Imagery
Three baseline "scientific" images, derived from satellite data averaged over a month, are
used in the various stages of this study. Two are global versions of topics used in previous
research[2]; the third is a related global image: sea surface temperature anomaly. Five images
were created for each topic: one with no scaffolding, three each with one element of scaffolding
(color, geographic labels, or title), and one with all three elements of scaffolding.
[Figure: Unscaffolded image.]
[Figure: Image with the "culturally-relevant colors" layer of scaffolding.]
[Figure: Image with all three types of scaffolding: colors, geographic labels, and title/scale bar.]
Clinical Interviews
Clinical interviews present subjects with two of the three sets of images. Images are
randomly selected for participants, so that 6 novices and 4 experts look at each possible pair of
sets. Order of presentation of the sets is also randomized. Finally, within each set, order of
presentation is randomized to prevent fatigue and/or learning effects. Each subject is shown 10
images: 2 unscaffolded, 6 with 1 layer of scaffolding, and 2 with all three layers of scaffolding.
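The counterbalancing just described can be summarized in a short sketch; the set names, participant identifiers, and helper function below are hypothetical and only illustrate the assignment logic, not the study's actual protocol code.

# Illustrative counterbalancing sketch (hypothetical names, not the study's code):
# each participant sees two of the three image sets, every possible pair is seen
# by 6 novices and 4 experts, and presentation order is shuffled.
import itertools
import random

sets = ["sea_surface_temperature", "chlorophyll", "sst_anomaly"]  # assumed set labels
pairs = list(itertools.combinations(sets, 2))                     # the 3 possible pairs

def assign(participants, per_pair):
    """Balance participants evenly across the three possible pairs of sets."""
    schedule = [pair for pair in pairs for _ in range(per_pair)]
    random.shuffle(schedule)
    return dict(zip(participants, schedule))

novices = [f"novice_{i:02d}" for i in range(1, 19)]   # n = 18 -> 6 per pair
experts = [f"expert_{i:02d}" for i in range(1, 13)]   # n = 12 -> 4 per pair
assignments = {**assign(novices, 6), **assign(experts, 4)}

# Order of the two sets (and of the five images within each set, not shown here)
# is also randomized for each participant.
for person, pair in assignments.items():
    print(person, random.sample(pair, k=2))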
Interviews are semi-structured, asking several content questions and/or tasks followed
by probes of "How do you know?" to exhaustion. Content questions concern the main idea of the
images, the meaning of the colors, the time of year depicted, the location of the equator, the
area depicting the most extreme values, and the measurement units.
Interviews are audio and video recorded, and will be transcribed and coded using
qualitative coding measures [6].
Eye-tracking
The eye-tracking experiment will use, for each subject, the single set of 5 scaffolded images not
used in that subject's clinical interview. In addition, because subjects in the clinical interviews
recognized repeated versions of the same dataset, variations representing different months and
seasons of the year will be presented.
Subjects sit in front of a computer screen with an SMI Systems RED™ eye-tracking device on the
tabletop underneath the screen. After eye position is calibrated using standard SMI calibration
procedures, subjects are presented with the images and asked four of the content questions from
the clinical interviews while eye position is tracked.
Results
Expert subjects to date typically answer almost all questions correctly for the first version
of any image shown, then recognize the second version as simply a different display of the same
data. Because experts also speak at great length about their answers, they have not typically been
shown all the images, but rather at least one without color scaffolding and one with it, until they
indicate recognition of the same dataset. Experts also indicate that much of their relevant
knowledge comes from both graduate studies and professional practice, such as attending seminars.
Novices struggle to answer content questions and do not always use all available image elements,
such as the title and scale bar, as evidenced by a subject who said she noticed the scale bar several
questions into the interview. Novice correctness on various images will point to which scaffolds
lend most assistance for scientific meaning-making.
Eye-tracking pilot data confirm the different patterns of image use, with novices
focusing on a more limited area of the image and generally relying heavily on North America.
Statistical tests will reveal whether the dwell times on each feature differ significantly as well,
pointing to salient features, and effective and ineffective scaffolds when matched with correct
answers. Response time and fluency may point to whether experts have automated processes with
which novices still struggle, or simply perform the tasks faster.
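Once fixation data are exported from the tracker, a group comparison of dwell times on a given feature could be run along the following lines; the file name, column names, and the particular tests are assumptions for illustration, not the study's actual analysis code.

# Hypothetical dwell-time comparison for one area of interest (the color bar).
# Assumed input: a CSV export with one row per subject and AOI, containing
# columns 'group' ('expert' or 'novice'), 'aoi', and 'dwell_ms'.
import pandas as pd
from scipy import stats

fixations = pd.read_csv("fixations.csv")                 # assumed export file

colorbar = fixations[fixations["aoi"] == "color_bar"]
expert_dwell = colorbar.loc[colorbar["group"] == "expert", "dwell_ms"]
novice_dwell = colorbar.loc[colorbar["group"] == "novice", "dwell_ms"]

# Welch's t-test; with small pilot samples a nonparametric alternative such as
# the Mann-Whitney U test (stats.mannwhitneyu) may be more appropriate.
t, p = stats.ttest_ind(expert_dwell, novice_dwell, equal_var=False)
print(f"Color bar dwell time: t = {t:.2f}, p = {p:.3f}")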
Significance
First and foremost, understanding which particular features of images confuse and assist
users when making meaning from visualizations of scientific data will aid design of scaffolds and
improve access to scientific meaning for non-scientists.
Second, this research lends experimental confirmation and triangulation to data collected
previously in interviews with a more homogeneous subject population. The triangulation should
shed further light on confusions and effectiveness of particular scaffolds in visualizations.
Finally, drawing on the social constructivist position could help connect a more purely
sociocultural perspective with an individual, cognitive, neuroscience view of learning, whose
practitioners have for years been attempting to weigh in practically on learning and education.
References
[1] R. Driver, "Constructivist approaches to science teaching," in Constructivism in Education, Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1995.
[2] M. Phipps and S. Rowe, "Seeing satellite data," Public Understanding of Science, vol. 19, no. 3, pp. 311–321, May 2010.
[3] C. Ware, Information visualization: Perception for design. San Francisco, CA: Morgan Kaufman Publishers, 2004.
[4] C. Levi-Strauss, The savage mind. Chicago: The University of Chicago Press, 1966.
[5] J. L. Kincheloe and K. S. Berry, Rigour and complexity in educational research: Conceptualizing the bricolage. Berkshire, England: Open University Press, 2004.
[6] M. Patton, Qualitative research & evaluation methods, 3rd ed. Thousand Oaks, Calif.; London: Sage, 2002.
[7] J. Piaget, "L'épistémologie et ses variétés," in Encyclopédie de la Pléiade: Logique et connaissance scientifique. Paris: Gallimard, 1967.
[8] L. Vygotsky, Mind in society: The development of higher mental processes. Cambridge, MA: Harvard University Press, 1978.
[9] D. Wood, J. S. Bruner, and G. Ross, "The role of tutoring in problem solving," Journal of Child Psychology and Psychiatry, vol. 17, no. 2, pp. 89–100, 1976.
[10] A. M. MacEachren and M.-J. Kraak, "Research challenges in geovisualization," Cartography and Geographic Information Science, vol. 28, no. 1, pp. 3–12, 2001.
[11] J. S. Faughn and R. A. Serway, College Physics, 6th ed. Canada: Thomson, Brooks/Cole, 2003.
[12] J. T. Bruer, "Points of View: On the Implications of Neuroscience Research for Science Teaching and Learning: Are There Any? A Skeptical Theme and Variations: The Primacy of Psychology in the Science of Learning," CBE Life Sci Educ, vol. 5, no. 2, pp. 104–110, 2006.
[13] P. E. Compton, P. Grossenbacher, M. I. Posner, and D. M. Tucker, "A Cognitive-Anatomical Approach to Attention in Lexical Access," Journal of Cognitive Neuroscience, vol. 3, no. 4, pp. 304–312, 1991.
[14] Y. G. Abdullaev and M. I. Posner, "Time Course of Activating Brain Areas in Generating Verbal Associations," Psychological Science, vol. 8, no. 1, pp. 56–59, Jan. 1997.
[15] A. K. Lobben, "Navigational Map Reading: Predicting Performance and Identifying Relative Influence of Map-Related Abilities," Annals of the Association of American Geographers, vol. 97, no. 1, pp. 64–85, Mar. 2007.
[16] G. J. Posner and W. A. Gertzog, "The clinical interview and the measurement of conceptual change," Sci. Ed., vol. 66, no. 2, pp. 195–209, 1982.
[17] J. Piaget, The Child's Conception of the World. New York: Harcourt, 1929.