Evaluating Learning in Virtual Environments

M. A. Syverson
Division of Rhetoric and Composition
University of Texas at Austin
Austin, TX 78712-1122
Phone: 512.471.8734 FAX: 512.471.4353
email: syverson@uts.cc.utexas.edu

John Slatin
Division of Rhetoric and Composition
University of Texas at Austin
Austin, TX 78712-1122
Phone: 512.471.8743 FAX: 512.471.4353
email: jslatin@mail.utexas.edu

Note: This document describes a project funded through the Computer Education and Training Initiative sponsored by DARPA for 1995-1997.


We can provide meaningful, qualitative methods of evaluation for diverse kinds of learning occurring in virtual environments. These methods will integrate emergent technologies, human resources, and current research in learning and development. The theoretical aspect of the project is based on pioneering work on authentic assessment by Myra Barrs and others, as well as on the Learning Record program developed by Mary Barr and colleagues in collaboration with Myra Barrs, which links classroom evaluation with large-scale program assessment. Our role is to extend their work, which has been extensively field-tested in conventional learning environments, into online learning environments constructed in virtual spaces such as MOOs and MUDs. We will develop tools and resources for teachers and learners to support authentic assessment in these environments. The practical aspect of the work will be grounded in the ongoing teaching and research in the University of Texas Computer Writing and Research Labs and the Undergraduate Writing Center, where learning environments based on MOOs and MUDs are already well established. The project is itself innovative in the way that it integrates three key functions--research on learning and development, classroom practice, and authentic assessment of both individuals and programs--in learning environments made possible by new technologies.

Statement of Work

This project focuses on transporting, adapting, and expanding a highly successful model of authentic assessment for conventional learning environments into virtual environments such as MOOs and MUDs. These virtual environments both constrain and enable learning and its evaluation (as do conventional classrooms). Our work therefore extends into three arenas: classroom instruction, research, and assessment. The first phase of the work focuses on classroom instruction, with the development of resources for teachers. The research effort focuses on our inquiry into the kinds of learning that take place in MOOs, their observable features, and their representation via teacher interpretations and samples of student work. The assessment part of the project focuses on authentic assessment that is valid and reliable enough to serve for both individual and program evaluation.

The Problem of Evaluation

New technologies present daunting challenges for educators. Software has moved rapidly beyond "drill and kill" programs to interactive simulations, hypermedia, and virtual reality explorations. Hardware in many classrooms is likely to include a bewildering array of peripherals such as modems, CD-ROM drives, and removable storage drives. Networks add still another layer of hardware, software, and communications complexity. Teachers must not only learn (and continue to learn) how to use these rapidly changing technologies themselves, but they must also rethink their teaching practices, design new activities for teaching and learning, and try to evaluate the learning of students as they engage in those activities. How will learning in highly interactive, highly technological environments be measured and evaluated? It is clear that, helpful as they may be for routine bookkeeping tasks, even very intelligent agents cannot adequately account for human learning in complex systems involving people, machines, and networks in dynamic interaction. It is essential to develop systems that will support and legitimize the kinds of teaching and learning that we value in environments that are learner-centered, collaborative, authentic, and interactive. It is also essential that these systems relieve, rather than increase, the burden on teachers.

Tools for Evaluation

The goal of this project is to develop tools, both conceptual and technical, to support teachers and learners in interactive learning environments such as MOOs and MUDs. The principles guiding the development of these tools are context-independent; that is, they will apply regardless of whether the interface is text only, graphical, or a mixture of text and graphics. The tools will take advantage of the database features of MOOs and MUDs to give learners and teachers a place, a framework, and a means to collect observations of learning activities as well as samples of student work in flexible, searchable formats. They will take advantage of the MOO and MUD capabilities to log the naturally occurring activities and interactions of learners as a key source of observations about student learning. They will also provide a means to incorporate relevant information and observations of other learning activities in classrooms or outside of school, including records of conventional performance assessments, grades, reading logs, samples of writing, interviews with parents, observations by resource teachers, and so on.
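The kind of record-keeping tool described above can be pictured in a few lines of code. The sketch below is a hypothetical illustration only (the class and field names are our own invention, not the project's actual MOO database schema): a small, searchable store of dated observation records drawn from multiple sources.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Observation:
    """One dated observation of a learning activity (hypothetical schema)."""
    student: str
    source: str          # e.g. "MOO log", "teacher note", "parent interview"
    text: str
    timestamp: datetime = field(default_factory=datetime.now)


class LearningRecordStore:
    """A minimal searchable collection of observations."""

    def __init__(self):
        self.observations = []

    def add(self, obs):
        self.observations.append(obs)

    def search(self, student=None, source=None):
        """Filter observations by student and/or source; None matches anything."""
        return [o for o in self.observations
                if (student is None or o.student == student)
                and (source is None or o.source == source)]
```

A teacher's tool built on the MOO database would work in roughly this spirit: observations accumulate in the course of normal activity and can later be retrieved by student or by source when assembling a record.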

Portfolios in Virtual Spaces

The resulting online portfolio will give a multidimensional, developmental perspective on student learning over time in a variety of contexts and learning situations. Such a portfolio goes far beyond a teacher's gradebook, and also accounts for much greater diversity in teaching and learning styles, practices, and activities. These portfolios may include text, graphics (produced on the computer or scanned in), video, or audio evidence of student learning.

Advantages of Online Portfolios

Unlike conventional portfolios, these portfolios can be stored in very little space, viewed from remote locations, and duplicated many times at little cost. This is a distinct advantage for large-scale assessment, such as for program evaluations. Computer management of the access, arrangement, and viewing of this collection of materials will allow far more flexibility, specificity, and control over reports evaluating student learning and program effectiveness, without overly complicated recordkeeping by the teacher. In this way, statistically significant information could be reported numerically for administrative purposes, while narrative descriptions of learning activities, together with the teacher's observations, could be provided in responding to students for feedback on their own development, or in reporting to parents about students' progress. For example, we may be interested in finding out how many objects a learner has created in a MOO, or how many times the help files have been consulted--numerically significant questions; or we may want to know how several students worked together to create a room and its furnishings, or how a conflict broke out, questions which are qualitatively different and which do not lend themselves to statistical analysis.
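The split between numerically significant questions and qualitative ones can be made concrete with a toy example. The event log and function names below are hypothetical (a real MOO log would be far richer), but the contrast is the same: counting events yields figures for administrative reports, while the narrative detail feeds qualitative interpretation and feedback.

```python
# Hypothetical event log: (student, event_type, detail) tuples,
# standing in for entries a MOO might record automatically.
log = [
    ("ana", "create_object", "a red chair"),
    ("ana", "help", "consulted 'help build'"),
    ("ana", "create_object", "a fireplace"),
    ("ben", "help", "consulted 'help say'"),
]


def count_events(log, student, event_type):
    """Numerical report: how many times a student did something."""
    return sum(1 for s, e, _ in log if s == student and e == event_type)


def narrative(log, student):
    """Qualitative report: the raw details, for interpretation by a reader."""
    return [detail for s, _, detail in log if s == student]
```

The counts answer questions like "how many objects has this learner created?"; the narrative entries support the qualitatively different questions, such as how several students collaborated on a room, which resist statistical analysis.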

Collecting Data About Learning

The tools we are developing are intended to support the collection of a wide range of data about student learning, the selection and interpretation of meaningful samples of this data, the linking of interpretations with supporting evidence, and the reporting of evaluations of student learning in useful formats. They can be implemented using technologies currently available, and expanded as faster and more powerful technologies permit. By simplifying the collection, interpretation, storage, and communication of data, such tools have the potential to transform not only evaluation, but research and development across a wide range of inquiries into learning and development; they may also serve to transform instruction through reflective practice.

Evaluation of Learning in MOOs and MUDs

There are well established methodologies for evaluating learning in situations where the outcome is known in advance, or the path to the desired result can be predetermined. However, it is clear that once learners have gone beyond a fairly rudimentary level, they do not follow a well-mapped path to a predictable outcome. This is true in nearly every learning situation from childhood on. For example, we can generally predict the learning path of early readers based on observations of a large number of children. We can also predict the basic knowledge a pilot needs to get a plane off the ground, handle it in the air, and land it safely, all things being equal. Under real-life conditions, however, learning typically takes place in real time through interactions in which people bring their knowledge and experience to bear in rapidly changing environments, coordinating their activities with others. The paths to a successful outcome are less predictable, and the outcomes themselves are not always certain.

In these cases we are talking about emergent properties in learning situations, and the development of learners who can thrive in such environments. Even if the configuration of cockpit and plane could remain completely stable, it would not be possible to represent in the simulator all the complexity and uncertainty of real-world flying. Given the rapid rate of technological change, even the built environment cannot be economically modeled, nor can learners be re-immersed in simulations every time there is a modification or a new development.

We need to be able to account for learning that takes novel forms in unpredictable situations with uncertain outcomes. However, this does not mean that we must sit back and simply observe naturally occurring systems and hope the learners get something out of them. Rather, it means engaging learners in a wide array of activities in a richly represented environment, inviting them to construct and interact in explorations that will support such learning activities as independent decision-making, considering alternative possibilities for interpretation or action (looking for the "second right answer"), coordination with others, problem-solving, prediction, and analysis. Standardized assessment methods will never be able to capture the salient features of such open-ended learning situations.

Our experience with the Learning Record and with hypermedia and computer-mediated communication in network-based classrooms at the University of Texas demonstrates that it is possible to build successful methods of assessing emergent learning, in which interactions in a richly represented environment lead to understandings which did not exist prior to the interactions, and which persist beyond the immediate situation.

Comparison with Alternative Approaches

How does the proposed evaluation system, based on the Learning Record (CLR) model, compare with existing methods of measurement in assessing learning? There are three dominant models of evaluation and assessment besides the CLR in use today. The first model is the most familiar classroom-based assessment:


Grades indicate the degree to which learners met teachers' expectations for students in a particular course. This system allows for great diversity in classroom learning situations, since it does not dictate standard tasks and activities. However, it is extremely difficult to use grades as a basis for comparison across student populations, even between two classes at the same grade level and subject in the same school. Therefore, grades alone are not sufficient for determining the relative success of programs, departments, schools, or districts, for example. The system of classroom grades does reflect the teacher's proximity to the learner and the learning situation, but there are no assurances against bias, nor any substantive accountability for what is taught or learned, nor are there any real connections to current learning theory and research. Individual grades tend to obscure or discount students' and teachers' collaborative activities, to privilege products over activities, and to reinforce behaviorist assumptions about learning long discredited among learning theorists. Grades become the "reward" intended to motivate students to behave in certain ways or to punish them for their inability or unwillingness to perform as expected. Students and teachers gravitate towards safe, "school" assignments and responses.

Standardized assessments

As a counterpoint to the grading system, and to compensate for some of its limitations, the system of standardized assessment has emerged, including such tests as the PSAT, the SAT, the ACT, various state-mandated tests, and other test packages marketed by assessment publishers. The popularity and influence of standardized tests rests on the claims of test providers for their objectivity, reliability, and generalizability. Such claims are grounded by analogy with positivist paradigms of scientific research. However, these claims have for some time now been challenged by a large number of researchers and theorists in the field of learning and development. Claims of objectivity, for example, are viewed as seriously compromised by precisely what standardized test providers have regarded as a strong point: the removal of the test situation from authentic contexts of learning. Further, there has been a great deal of criticism about the persistent and systemic skewing in accounting for the learning of students from non-mainstream cultures. Reliability and generalizability claims have presumably assured generations of test-takers that the tests themselves are fair. A thermometer is an "objective" measure of body temperature; temperature by itself, however, reveals little about a person's physical condition. Similarly, standardized test scores may take students' academic temperatures while indicating very little of what they know and know how to do. This limitation is true not only for measuring understanding of complex concepts but even for students' grasp of the so-called basics.

The major claim for this kind of assessment is that comparability of the levels of performance can be shown across student populations. However, as Pamela Moss points out, reliability without validity is a meaningless concept. When comparing test scores across populations there is a great deal "dropped out" of the picture, or significantly misrepresented: How well can we represent the larger populations of recently-arrived immigrants who are not yet proficient in English? Where are the children in migrant families whose education may be repeatedly interrupted, and who may not even be in school on the day the assessment is given? What about children with disabilities, or with unconventional or unrecognized capabilities, or even the students with inspired or nontraditional teachers? As a result of these difficulties, many critics have become seriously concerned with the distorting effect of standardized assessments on teaching and learning situations.

Portfolio Assessment

Concerns about the problems with standardized assessment have fueled a search for more authentic and flexible methods of assessment. A major recent development is the portfolio assessment movement. Portfolios are collections of student work, often accompanied by student commentary on the collection, which attempt to reflect more of the context of learning over time. The flexibility in providing materials for a portfolio allows this form of assessment to accommodate a wider range of work, as well as accounting for the work of students from diverse cultural and linguistic backgrounds. Since the materials are typically produced in the course of normal classroom activities, portfolios provide more "authentic" assessment, that is assessment closely linked to and reflective of actual learning situations.

Portfolios and College Admissions

Increasingly, colleges are considering portfolios of student work as part of the admissions process. Portfolios can serve to supplement conventional measures such as GPAs and SAT scores which may not reflect important aspects of student development. In some cases, students who were not accepted at a particular college or university presented portfolios as evidence of their capabilities and were then accepted.

Potential Problems with Portfolios

Portfolios, however, present some challenges when it comes to assessment beyond the classroom. Because they are diverse compilations of materials, it is difficult to make any comparison across student populations, or to make well-supported interpretations about the effectiveness of programs, schools, or districts. It is very difficult to achieve consistent results among different readers. And it is difficult to make informed interpretations about the student's development over time. It can be expensive and time-consuming to train readers and to conduct the assessment itself, because of the volume of materials that must be sifted through. The portfolio assessment movement has attempted to address these difficulties by establishing some standard requirements for portfolios, either in terms of tasks and activities or products; however, the further we push in this direction, the closer we come to replicating the worst features of standardized assessment.

The Learning Record Model

The CLR is a model which incorporates the strengths of all of these kinds of learning assessment while also addressing their weaknesses. At the classroom level, the Records compile evidence of students' learning from multiple sources, including student writing in response to class activities, their observations and interpretations of their own learning, interviews with family members or other adults, and teachers' additional observations and interpretations of student learning. The Record is accumulated over time, yet it is not merely additive; materials are selected to be included where they provide evidence in support of interpretations about learning. At this point, it is a more sophisticated form of portfolio evaluation, and as such, can inform grades at the classroom level. However, to be useful as an assessment of learning, we must be able to take this richly documented perspective on student learning to a larger scale. We are able to accomplish this through a unique process of moderation readings, which serve to guard against subjective bias, establish comparability, and assure validity, without sacrificing any of the benefits of authenticity.

While the goal of both standardized testing and portfolio assessment is to eliminate, as much as possible, any participation by the teacher, the teacher's professional judgment is central to the CLR. The moderation process provides teachers with a powerful means of professional development, and also acknowledges teachers as a vital component of the assessment process.

The Moderation Process

Individual student placements are made at the classroom level, and then a statistical sample of records is chosen for the moderation process. The size of this sample can be varied as needs dictate, but need not exceed 20%, randomly selected, to provide an accurate representation. For moderation readings, student records are masked to conceal the student's identity, the teacher's name, and the name of the school, as well as the placement of the student on the developmental scales. The first round of moderations takes place at the school site. After a brief orientation in which teachers read several unmasked exemplars (selected from prior moderations) and discuss the developmental scales and the evidence which grounds interpretations, teachers begin to read the masked student records. Teachers read these records in pairs, reading only records other than their own, and discussing the evidence and interpretations found in the records. However, the original teacher or a representative of the teacher is present at the moderation to provide, on request, information about the learning environment that may not be evident in the record itself. Together each pair of readers decides on a placement on the developmental scales and records it. This choice is then also masked while the record is sent on to the next stage of the moderation process, typically conducted at the regional level. The procedures are repeated, with pairs of teachers reading and discussing records and recording their decisions about placement. Thus, each record selected for the moderation process receives three readings and three placements: from the original teacher, from the school-site moderation, and from the regional moderation. The CLR program office aggregates the numerical results and forwards them, together with copies of a sampling of records, with identities removed, as exemplars to the district, which then reports results to appropriate government agencies. 
The original records, with the readers' placements attached, are returned to the classroom teachers or their representatives, helping to fine-tune the teachers' judgments.
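The sampling and masking steps of the moderation process can be sketched schematically. The code below is an illustration under our own assumptions (records as plain dictionaries with fixed field names), not the CLR's actual procedure; it shows a random sample of roughly 20% being drawn and identifying fields, including the prior placement, being concealed before the records circulate.

```python
import random


def select_moderation_sample(records, fraction=0.20, seed=None):
    """Randomly select a fraction of records for moderation readings.

    The text above suggests a 20% random sample suffices for an
    accurate representation; the fraction is adjustable as needs dictate.
    """
    rng = random.Random(seed)
    k = max(1, round(len(records) * fraction))
    return rng.sample(records, k)


def mask(record):
    """Return a copy with identities and the prior placement concealed.

    Field names here are hypothetical stand-ins for the masked items
    named above: student, teacher, school, and developmental-scale placement.
    """
    hidden = {"student", "teacher", "school", "placement"}
    return {key: ("[masked]" if key in hidden else value)
            for key, value in record.items()}
```

Pairs of readers would then see only the evidence and interpretations in each masked record, recording their own placement, which is masked in turn before the record moves to the regional stage.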

Critical Questions about Evaluation

Whatever model of assessment might be under consideration, there are some important questions to ask about our evaluation methods in complex learning environments:

  1. What kinds of learning are believed to be occurring in the system? What are some observable signs of these kinds of learning?

  2. By what means can learning be observed in this system?

  3. What kinds of activities and interactions does the environment under consideration support? How do these activities and interactions contribute to the learning we are evaluating?

  4. How many sources of information will actually be used to inform any evaluation or interpretations about learning? What methods will be used to gather the needed information from various sources?

  5. For what purposes will information about learning be gathered and analyzed? (There may be multiple purposes.) What is the least intrusive method for getting the information we need? What theoretical framework will shape the interpretations?

  6. For what audiences will reporting or evaluation of students' learning be provided? What kinds of information does each audience need?

  7. What institutional support will be provided for the evaluation process (e.g., time, financial support, staff development, technological resources, human services, teaching assistance to cover teacher travel and participation in evaluation activities, and so on)?

  8. What changes in learners, classroom practices, communities, and educational institutions are likely to result from the evaluation process?

  9. What are some foreseeable limitations, problems, or risks associated with the proposed evaluation process? How might these be forestalled, prevented, or accommodated?

  10. How well does the proposed evaluation process reflect educators' real goals and expectations for learners? How well does it reflect learners' own goals and expectations?

  11. What safeguards have been provided to ensure equity, validity, authenticity, and reliability in the evaluation process?

  12. What does the proposed evaluation imply about the relationship between research, learning and teaching, and evaluation?

  13. What is the most reasonable and useful schedule for reporting results of evaluations?

  14. What format will best serve specific audiences and purposes? (e.g., some instances call for informal reporting in narrative summaries, while in other cases quantifiable scores are desirable)

Our comparative research has led us to conclude that the principles and methods established by the Primary Language Record and further developed in the Learning Record best address these questions. A major reason is that these principles and methods allow us to account for learning situations which are diverse, complex, and emergent, and to acknowledge the multidimensional nature of learning.

Five Dimensions of Learning

Learning occurs across complex dimensions which are interrelated and interdependent. Learning theorists including Lev Vygotsky, James Britton, and Myra Barrs have argued that learning and development are not an assembly line that can be broken down into discrete steps occurring with machine-time precision, but an organic process that unfolds along a continuum according to its own pace and rhythm. The main problem to date has been in accounting for these dimensions of learning. When the school environment can be tightly controlled, it is possible to ignore or suppress the richness, complexity, and diversity of naturally-occurring learning. However, new technologies have generated new possibilities for constructing learning environments such as MOOs and MUDs which are radically different from traditional classrooms and which have irrevocably shattered the illusions of control by which schools have so far maintained their authority and credibility. We may expect a backlash against the use of new technologies in unconventional ways from conservative institutions unless we are able to provide evidence that we can account for learning in these unconventional environments.

It is possible, however, to demonstrate that our evaluation methods are valid, rigorous, and meaningful, and further, that they generate more useful information about student learning than conventional methods. The heart of this approach, pioneered by Myra Barrs and her colleagues at the Centre for Language in Primary Education, is observation of naturally-occurring activities by a professional educator (the teacher), combined with samples of work gathered in the course of normal activities (as compared with contrived "test" situations) and interviews with students and parents. The teacher (and student) is actively searching for, and documenting, positive evidence of student development across five dimensions: confidence and independence, skills and strategies, use of prior and emerging experience, knowledge in content areas, and critical reflection. These five dimensions cannot be "separated out" and treated individually; rather, they are dynamically interwoven and interdependent. Properly prepared records of achievement across these dimensions can be shared with a high degree of inter-reader reliability.

1. Confidence and independence

Confidence and independence are aspects of learning often underestimated, primarily because it is difficult to account for them using conventional measures. However, they are essential dimensions of learners' development which can be observed and interpreted over time. Evidence might come from a variety of sources: teachers' observations, students' reflections, or parents' interviews. We see growth and development when learners' confidence and independence become coordinated with their actual abilities and skills, content knowledge, use of experience, and reflectiveness about their own learning. It is not a simple case of "more (confidence and independence) is better." The overconfident student who has relied on faulty or underdeveloped skills and strategies learns to ask for help when facing an obstacle; the shy student begins to trust her own abilities and begins to work alone at times, or to insist on presenting her own point of view in discussion. In both cases, students develop along the dimension of confidence and independence.

2. Skills & Strategies

Skills and strategies represent the "know-how" aspect of learning. When we speak of "performance" or "mastery," we generally mean that learners have developed skills and strategies to function successfully in certain situations. Performing long-division, constructing a complete sentence in writing, decoding a written text and understanding its meaning, operating a microscope, or landing a plane are examples of skills and abilities. Educational evaluation has generally focused most of its attention on this aspect of learning, and therefore, we have the most evolved methods of evaluation in measuring skills and abilities as well as content knowledge. However, these two dimensions provide only a partial representation of learning.

3. Use of Prior and Emerging Experience

A crucial but often unrecognized dimension of learning is the ability to make use of prior experience as well as emerging experience in new situations. Once again, conventional methods are simply incapable of evaluating this aspect of learning. It is necessary to observe learners over a period of time while they engage in a variety of activities in order to account for the development of this important capability, which is at the heart of creative thinking and its application. In predetermined learning situations we cannot discover just how a learner's prior experience might be brought to bear to help scaffold new understandings, or how ongoing experience shapes the content knowledge or skills and strategies the learner is developing. To observe development of this capability, we must be actively searching for evidence of what the learner can do, rather than for evidence of deficits in learning, as is the case with conventional evaluations. MOOs and MUDs offer the potential to get a much better perspective on this dimension as students explore, construct, and interact with other participants.

4. Knowledge and Understanding

Knowledge and understanding is the most familiar dimension, focusing on the "know-what" aspect of learning. How many bones are in the human body? When did the Civil War start? What is a prime number? Who wrote Moby Dick? These are questions about content knowledge. Over time educational institutions have developed progressively more sophisticated methods of accounting for this kind of learning. Unfortunately, in the process, content knowledge has become fragmented into often unrelated bits of information, rather than rich dynamic networks of associations and relationships. Students doggedly memorize (or fail to memorize) lists, facts, terms, dates, and struggle to figure out why it all matters. In MOOs and MUDs, content knowledge can be re-integrated with contexts of use, so that students learning about anatomy, for example, might "build" a human body in a space, and explore its interrelated systems and parts by the process of constructing them as well as "moving about" in them. The content knowledge gained in this way has a better chance of being apprehended, remembered, and connected to the learner's experience.

5. Reflection

When we speak of reflection as a crucial component of learning, we are not using the term in its commonsense meaning of daydreaming or abstract introspection. We are referring to the development of the learner's ability to step back and consider a situation critically and analytically, with growing awareness of his or her own learning processes, a kind of metacognition. Learners need to develop this capability in order to use what they are learning in other contexts, to recognize the limitations or obstacles confronting them in a given situation, to take advantage of their prior knowledge and experience, and to strengthen their own performance. Evidence of development along this dimension is most often supplied by the learners themselves. We can get a sense of all five of the dimensions at play even in brief comments such as "I thought this was hard at first, but then I figured out there was a trick to it. I guess I'm starting to get good at math."
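Because the five dimensions are interwoven rather than separable, a single observation may bear on several of them at once; a comment like the one just quoted touches reflection, confidence, and content knowledge together. As a minimal sketch of how an online record might tag evidence against these dimensions (the function and field names are hypothetical, not the CLR's actual scheme):

```python
# The five dimensions named above, used as tags on evidence entries.
DIMENSIONS = (
    "confidence and independence",
    "skills and strategies",
    "use of prior and emerging experience",
    "knowledge in content areas",
    "critical reflection",
)


def tag_evidence(text, dimensions):
    """Attach one or more dimension tags to a piece of evidence.

    Because the dimensions are interdependent, a single observation
    is allowed to carry several tags at once.
    """
    unknown = [d for d in dimensions if d not in DIMENSIONS]
    if unknown:
        raise ValueError(f"unknown dimensions: {unknown}")
    return {"text": text, "dimensions": list(dimensions)}
```

Tagging of this kind would let a teacher later search the record for positive evidence along any one dimension without ever treating the dimensions as independent scores.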

Learning as a Cultural Activity

Learning is not simply a matter of individual development, however. It is also, by definition, a cultural activity. It takes place in and through situated cultural activities and practices. In this project, we will examine learning as it emerges from interactions within the virtual environment, and from interactions between the virtual environment and the external environment(s) in which learners and instructors participate.

Internal Cultures

By internal culture, we mean the social structures, dynamics, and assumptions that function within the tinyMUSH environment. Some of these assumptions appear as constraints imposed by the designers on participant actions; others emerge through participants' interactions with one another and with the designed environment.

External Cultures

By external culture, we mean the sets of assumptions, practices and activities in which learners engage outside the virtual environment. For most learners, the MOO is one aspect of a complex learning environment that includes physical classrooms, instructors, classmates, and institutional structures such as academic departments, financial aid requirements, and the like. The culture of the MOO, therefore, and the activities and practices it supports, must be understood as situated within these external cultures. For example, learners visit the OWL for help with writing assignments that originate outside the OWL and that will be evaluated by instructors outside the OWL, who are in turn constrained by program requirements, teaching evaluations, etc. The same may hold for learners in the more open environment of AcademICK as well, though in the latter case the constraints imposed by this external culture may be less clearly defined and less closely coupled to activities inside the virtual environment.


The MOO or MUD provides a satisfying learning environment for students, though they may need some time to become accustomed to operating within it. Students accustomed to selecting items on multiple-choice tests may be disconcerted when they are expected to create new knowledge through interactions with peers and instructors in the virtual environment; students accustomed to writing formulaic, five-paragraph essays may be disturbed at being offered the freedom to create spaces and objects that represent their ideas and the relationships among them in spatial form; students for whom academic success has been a matter of moving on clearly defined paths through well-charted terrain may become disoriented in the more fluid environment of the MOO.

These feelings of discomfort and disorientation are especially common among students entering a MOO for the first time on their own; anecdotal evidence suggests, however, that "traveling" with a companion has a significant mitigating effect-- and that it provides a spur to learning, because the students almost immediately begin to ask one another questions about the environment, and begin sharing the results of their inquiries; tentative exploration more quickly gives way to more systematic probing. Thus the initial disorientation gives rise to learning.

Learning in the MOO

It is tempting to say that participants in the MOO environment must develop a certain level of technical expertise, a facility with the elementary mechanical aspects of negotiating the environment, before they can begin developing new concepts. But this is a potentially dangerous oversimplification. Feelings of disorientation and discomfort belong naturally and appropriately to demanding new situations; the sensations themselves are spurs to learning. By the same token, growing confidence in facing the intellectual challenges posed by the environment fosters technical experimentation as well: learners developing a richer understanding of complex material seek more complex forms in which to represent their understanding.

Learners may initially feel a sense of significant accomplishment, for example, when they create and describe simple objects; soon, however, they attempt to "animate" their creations, that is, to make them responsive to participants' actions; from there they may go on to endow their creations with the capacity for flexible response, so that different actions elicit different responses. There may be a further progression as well: the learner's initial investment may be in the objects he or she creates, in making those objects act in the way he or she imagines they should; but this soon leads to curiosity about the way other participants will respond to the objects in question, a curiosity which in turn becomes a more systematic effort to guide the behavior of other participants, who must do their "parts" in order for the environment to have its full effect.
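The progression described above, from a static described object to one that responds flexibly to participant actions, can be sketched in miniature. The Python fragment below is an illustrative model only, not code from an actual MOO or MUSH; the class, the lamp, and the verb names are hypothetical.

```python
# Illustrative sketch only (not MOO code): the three stages of object-building
# described above: a described object, an "animated" one, a flexibly
# responsive one.

class MOOObject:
    def __init__(self, name, description):
        self.name = name
        self.description = description
        self.verbs = {}  # participant action -> response text

    def add_verb(self, action, response):
        """'Animate' the object: attach a response to a participant action."""
        self.verbs[action] = response

    def handle(self, action):
        """Respond to a participant's action, with a default for unknown verbs."""
        return self.verbs.get(
            action, f"Nothing happens when you {action} the {self.name}."
        )

# Stage 1: a simple described object.
lamp = MOOObject("lamp", "A brass lamp, faintly glowing.")

# Stage 2: the object is made responsive to an action...
lamp.add_verb("rub", "The lamp grows warm in your hands.")

# Stage 3: ...and flexibly responsive, so that different actions
# elicit different responses.
lamp.add_verb("shake", "Something rattles inside.")

print(lamp.handle("rub"))   # The lamp grows warm in your hands.
print(lamp.handle("kick"))  # Nothing happens when you kick the lamp.
```

Even this toy version shows why the third stage pulls the builder toward other participants: the object's behavior only matters when someone else tries an action on it.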

Learners' requests for help provide another index of learning activity. MOO participants frequently request information and assistance, both from other participants and from the environment itself; it is far more "natural" for most participants to ask other players for help than to ask the system.
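As one illustration of how help requests might be tallied as an index of learning activity, the sketch below scans a session transcript for two simple request patterns. Both the transcript format and the matching heuristics are assumptions made for illustration, not features of any actual MOO logging system.

```python
# Illustrative sketch only: tallying help requests per participant in a
# hypothetical session transcript, as a rough index of learning activity.
from collections import Counter

transcript = [
    ("kim", "say how do I describe my room?"),
    ("lee", 'say type @describe here = "..."'),
    ("kim", "help @describe"),
    ("kim", "say thanks, that worked!"),
]

def is_help_request(line):
    """Crude heuristic: a system 'help' command or a question to other players."""
    text = line.lower()
    return text.startswith("help ") or "how do i" in text

requests = Counter(
    player for player, line in transcript if is_help_request(line)
)
print(requests)  # Counter({'kim': 2})
```

Note that the tally distinguishes the two channels discussed above: "help @describe" is addressed to the system, while "how do I..." is addressed to other players.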

The Instructor's Role

The virtual classroom becomes a student-centered learning environment, in contrast to the teacher-centered environment of the traditional classroom. The traditional classroom is designed as a kind of broadcast medium, a stage for the instructor's presentation of knowledge. In the networked classroom, however, emphasis falls on the processes by which students engage with one another, with the instructor, with the course material, and with the virtual environment itself, in an on-going process of creating new knowledge. This shift of emphasis in no way diminishes the instructor's importance. On the contrary: the electronic classroom actually expands the instructor's role. In establishing a learning environment in a MOO, teachers must carefully define the intellectual and social framework within which the class's negotiations for understanding will take place. That is, course design must now encompass not only the syllabus and daily lesson plans, but also the design of the virtual learning environment itself, including objects and room descriptions as stages for learners' interaction. In the classroom itself, teachers may be actively involved in working with individuals and groups of students through the range of their activities; the virtual environment actually permits more substantial engagement with more students than is generally possible in the traditional classroom.

The MOO environment also permits a level of engagement among students that would be almost impossibly disruptive in the traditional classroom. This is immensely beneficial, but it is not without its problems. Genuinely collaborative learning, in which students work together to construct an understanding of the course material that none of them would be likely to arrive at by themselves, is extremely powerful and highly rewarding. It is also very different from what most students are used to. The freedom and flexibility available in the MOO-- and the responsibility these entail-- may well prove confusing and difficult at first for students who have grown accustomed to being told exactly what to do.

The open-endedness of the MOO environment may seem frustrating to students who are used to seeing teachers settle disputes. Teachers will have to help students learn to accept their new freedoms and responsibilities. Teachers will have to help them understand the difference between the irresolution that results from failure to work out something that should have been within reach and the open-endedness that comes from engaging questions to which there is no single answer; and teachers will have to help students reframe their arguments and discover the differing assumptions underlying their disagreements. In short, a key aspect of the teacher's role will involve helping students learn the art of collaboration, the social as well as intellectual dynamics of argumentation and persuasion.

Previous Accomplishments in the Computer Writing and Research Lab

The primary mission of the Computer Writing and Research Lab is to develop and disseminate innovative uses of computer technology for instruction in writing. Research and pedagogical practice in the Computer Writing and Research Lab and its computer-based classrooms have concentrated in three areas: the use of computer-mediated communication (CMC), especially real-time, LAN-based conferencing, as a medium for class discussion and collaborative brainstorming and idea-generation; development of hypermedia and multimedia course materials; and, most recently, pedagogical uses of Usenet newsgroups and Internet applications, including MUDs and MOOs and other services such as the World Wide Web, for student research and writing.

Three Areas of Concentration

Computer Mediated Communication in the Classroom

From 1986 to 1988, work in the Computer Writing and Research Lab concentrated on the development of integrated tools for writing instruction. This work led to the formation in 1988 of the Daedalus Group, Inc., and to the creation of the Daedalus Integrated Writing Environment (DIWE). Winner of the EDUCOM/NCRIPTAL Award for Best Writing Software (1990), DIWE is a suite of applications that includes invention heuristics to help students select and investigate possible topics before beginning to compose a formal essay; a corresponding tool for prompted peer review; a LAN-based electronic mail/BBS system; utilities for turning in and viewing documents; and a simple word processor. The heart of the system is the real-time conferencing module, InterChange®, which has revolutionized writing instruction at the University of Texas and at hundreds of other institutions in the United States and elsewhere.

Hypertext and Hypermedia Projects

The Computer Writing and Research Lab supports a number of hypertext projects. Written Argumentation, published by Intellimation in 1993, is a HyperCard-based tutorial (since ported to ToolBook) designed to help students in first-year writing courses explore the complex concepts central to the notion of argumentation that underlies required writing courses at the University of Texas and many other colleges and universities. Poetic Conversations is an on-going HyperCard project in which students create hypertext documents reconstructing relationships among twentieth-century American poets; Bog People, composed in StorySpace, is an exploration of contemporary Irish poet Seamus Heaney's work. WebWriter is a ToolBook application for use in sophomore-level writing and literature courses; it enables students to create hypermedia "webs" emanating from a specific starting-point or center such as an assigned text. Our most ambitious project to date has been This Is Not a Textbook, a collaboratively-created, HyperCard-based, multimedia document on representations of technology designed for a new course about the impact of information technology on writing and literacy.

Internet and Related Services

Members of the Computer Writing and Research Lab staff have developed considerable expertise on pedagogical applications of the Internet and related services such as Usenet. Students in first-year writing courses, for example, gather information and opinions from Usenet newsgroups and Internet sources and evaluate them for reliability and currency as compared to materials available through traditional print media available in University libraries. A number of instructors are now making extensive use of the World Wide Web in their classes, not just as consumers of information but also as providers: rather than write traditional essays, students in these classes produce documents suitable for publication on the World Wide Web. The Computer Writing and Research Lab also maintains a home page (http://www.cwrl.utexas.edu), and recently (December 1994) published the first issue of a Web-based, refereed journal, CWRL, dedicated to the intersections of computers, writing, rhetoric, literature, and learning. The Computer Writing and Research Lab also maintains a tinyMUSH, AcademICK, discussed elsewhere in this proposal.

Facilities & Equipment

This project will take advantage of the facilities afforded by the Computer Writing and Research Lab (CWRL) in the Division of Rhetoric and Composition at UT Austin.

Computer Facilities

The Computer Writing and Research Lab (CWRL) was founded by Jerome Bump in 1986 with the support of an equipment grant from Project QUEST. Since then, the CWRL has earned an international reputation for innovation in teaching and research. As of January, 1995, the CWRL consists of five separate facilities in two buildings: three computer-based classrooms, a multimedia lab, and a research lab where instructors develop computer-based course materials. Two of the classrooms are equipped with PowerPC-based Macintoshes; computers in the third classroom are powered by 66-MHz Intel 486 processors. The multimedia lab is outfitted with Quadra 700s, and the Computer Writing and Research Lab itself houses a mixture of machines, including a NeXT workstation that functions as a server for the CWRL's tinyMUSH, or text-based virtual reality, called AcademICK, and for the CWRL's World Wide Web site. All CWRL facilities are connected to the Internet.

Classroom Activities

Students and instructors in the computer classrooms take advantage of the local area network connecting the computers to exchange essays, drafts, critiques, and other documents, and to hold computer-based discussions. These activities are coordinated by the Daedalus Integrated Writing Environment, which won the prestigious EDUCOM/NCRIPTAL award for Best Writing Software of 1990. Designed by former CWRL staff members specifically for the English classroom, the Daedalus software is now in use at more than 200 campuses on three continents, and is playing a major role in re-shaping pedagogical theory and practice. The combination of excellent equipment, outstanding software, and a superb staff of graduate students confirms the CWRL's position as the most important facility of its kind in the nation.

The Research Lab

Graduate students, undergraduates, and faculty use the CWRL's facilities and equipment to explore new ways of using information technology in research and instruction in rhetoric and composition, literary studies, hypertext and hypermedia, multimedia, and other new forms of electronic discourse. These new techniques are then taken into the classroom, where they assist students both in learning to read traditional and new texts, and in learning to produce traditional essays as well as new textual forms. Classroom activities, in turn, become the focus of research which is presented by CWRL staff at national and international conferences such as Computers and Writing, the Conference on College Composition and Communication, the Modern Language Association, and the European Society for the Study of English.


The CWRL will undergo a significant expansion over the next several years. The Division of Rhetoric and Composition expects to teach approximately 85 per cent of its undergraduate writing courses on-line by Spring 1999. This will require not only the construction of new computer classrooms but also a major commitment to curriculum development and to instructor training. English Department faculty and graduate students, who have been intimately involved in these projects, will also continue to develop computer-based components for existing literature courses as well as exciting new courses.

Key Personnel

Dr. John M. Slatin has been Director of the Computer Writing and Research Lab at the University of Texas at Austin since January 1989. A member of the University of Texas faculty since 1979, he has been teaching computer-based writing and literature classes since 1986, when he developed and taught a course for visually impaired students using synthetic speech as a key aid to writing and revision. He has participated actively in the design of the Daedalus Integrated Writing Environment, winner of the EDUCOM/NCRIPTAL Award for Best Writing Software (1990), now in use on more than 200 campuses in the United States, Canada, Asia, and Europe. Under Slatin's direction, the Computer Writing and Research Lab has gained an international reputation as a leader in integrating computers into instruction in writing and literature. As a visually impaired person and a student of interface design, Slatin is especially interested in issues of software accessibility and navigation. Since 1987, Slatin has written extensively on the educational uses of hypertext and computer-mediated communication, and lectured in the United States and Europe on the impact of information technology on the study of English and other humanities disciplines. He is currently completing a book entitled Constructive Criticism: The Impact of Information Technology on English Studies, forthcoming from Ablex. He has developed an innovative Ph.D. program in Computers and English Studies, and pioneered many new courses at the undergraduate and graduate levels; the most recent are a graduate seminar, Electronic Discourse, and a multi-section writing course, Computers and Writing, for which Slatin co-authored a multimedia document, This Is Not a Textbook.
In the Spring semester 1995 he co-taught a Fine Arts course on Virtual Environments, Cyberspace, and the Arts, in collaboration with dancer/choreographer Yacov Sharir, also of UT Austin, and visual artist Diane Gromala, now at the University of Washington, creators of the virtual reality performance piece Virtual Bodies (1994).

Dr. Margaret Syverson is an assistant professor in the Division of Rhetoric and Composition at the University of Texas at Austin, where she teaches and conducts research in the Computer Writing and Research Lab. She was a consultant to the California Department of Education during the development and implementation of an innovative model for statewide assessment in reading and writing, the CAP Writing Assessment and CAP Reading Assessment, later called the CLAS assessment. These assessments pioneered the use of matrix sampling and the scoring of complex texts via feature analysis. More recently, she has served as research consultant on evaluation for the California Learning Record (CLR), a program funded by the California Department of Education and the University of California to provide an alternative to on-demand standardized testing for Chapter One purposes. The Learning Record incorporates portfolios of student work, teacher observations, interviews with students and parents, and conventional classroom evaluations to provide a multidimensional analysis of learning which can be used both for classroom purposes and for large-scale assessment. She has also developed computer-based resources to support teachers using the CLR through on-line discussion groups and the hypertextual electronic version of the CLR. Dr. Syverson's research focuses on the applications of theories of distributed cognition and complex adaptive systems to teaching and learning environments.

  1. Barr, Mary. The Learning Record Handbook. San Diego: Center for Language in Learning, 1993 and 1994.
  2. Barrett, Edward, ed. The Society of Text: Hypertext, Hypermedia, and the Social Construction of Information. Technical Communication and Information Systems. Cambridge and London: MIT, 1989.
  3. Barrs, M., S. Ellis, H. Hester, A. Thomas. The Primary Language Record: Handbook for Teachers. Portsmouth, NH: Heinemann.
  4. Barrs, M., S. Ellis, H. Hester, A. Thomas. Patterns of Learning: The Primary Language Record and the National Curriculum. London: Centre for Language in Primary Education, 1990
  5. Berk, Emily, and Joe Devlin, eds. Hypertext/Hypermedia Handbook. New York: McGraw-Hill, 1991.
  6. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: Lawrence Erlbaum, 1991.
  7. Chernaik, Warren, Caroline Davis, and Marilyn Deegan, eds. The Politics of the Electronic Text. Oxford: Office for Humanities Communication, 1993.
  8. Darling-Hammond, L. Performance-based assessment and educational equity. Harvard Educational Review, 1994, 64 (1), 5-30.
  9. Falk, B. and Linda Darling-Hammond. The Primary Language Record at P.S. 261: How Assessment Transforms Teaching and Learning. New York: Columbia University, 1994
  10. Feltovich, Paul J., Rand J. Spiro, and Richard L. Coulson. Learning, Teaching, and Testing for Complex Conceptual Understanding. In Test Theory for a New Generation of Tests, ed. Norman Frederiksen et al. Hillsdale, NJ: Lawrence Erlbaum Associates.
  11. Fleck, Ludwik. Genesis and Development of a Scientific Fact. Chicago: University of Chicago Press, 1979.
  12. Gibson, William. Neuromancer. New York: Ace, 1984.
  13. Haraway, Donna J. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991.
  14. Helgeson, S.L. Assessment of Science Teaching and Learning Outcomes, 1992 (Monograph No. 6) National Center for Science Teaching and Learning.
  15. Hofstadter, Douglas R. Metamagical Themas: Questing after the Essence of Mind and Pattern. New York: Bantam, 1985.
  16. Holland, John H. Complex Adaptive Systems. Daedalus 121.1 (1992): 17-30.
  17. Hutchins, Edwin, and Brian Hazlehurst. Learning in the Cultural Process. in Artificial Life II. Ed. C. G. Langton, et al. Santa Fe Institute Studies in the Sciences of Complexity. New York: Addison Wesley, 1991. 689-706
  18. Hutchins, Edwin. Cognition in the Wild. Cambridge: MIT Press, in press.
  19. Krueger, Myron W. Artificial Reality II. Reading, Mass.: Addison-Wesley, 1991.
  20. Latour, Bruno. Visualization and Cognition: Thinking with the Eyes and Hands. Knowledge and Society: Studies in the Sociology of Culture Past and Present. JAI Press, 1986. 1-40.
  21. Laurel, Brenda Computers as Theatre. Reading, Mass.: Addison-Wesley, 1991.
  22. Lave, Jean. Situating Learning in Communities of Practice. Perspectives on Socially Shared Cognition. Ed. Lauren B. Resnick, John M. Levine, and Stephanie Teasley. Washington, D.C.: American Psychological Association, 1991. 63-82.
  23. McKnight, C., A. Dillon, and J. Richardson, eds. Hypertext: A Psychological Perspective. Horwood Series in Interactive Communication Systems. New York and London: Ellis Horwood, 1993.
  24. Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1986.
  25. Miserlis, Susan. The Classroom as an Anthropological dig: Using the California Learning Record (CLR) as a Framework for Assessment and Instruction. In Learning from Learners, 57th Yearbook of the Claremont Reading Conference. Claremont, CA: Claremont Graduate School.
  26. Moss, Pamela A. Can There Be Validity Without Reliability? Educational Researcher, Vol. 23, Number 2. March 1994. pp. 5-12.
  27. Nelson, Theodor Holm. Literary Machines. San Antonio: n.p., 1987.
  28. Pagels, Heinz R. The Dreams of Reason: The Computer and the Rise of the Sciences of Complexity. New York: Bantam, 1989.
  29. Papert, Seymour. The Children's Machine: Rethinking School in the Age of the Computer. New York: Basic Books, 1993.
  30. Ryan, Marie-Laure. Possible Worlds: Artificial Intelligence and Narrative Theory. Bloomington and London: Indiana UP, 1991.
  31. Sadler, D. Royce. Specifying and Promulgating Achievement Standards. Oxford Review of Education, Vol. 13, No. 2, 1987
  32. Selfe, Cynthia L. and Susan Hilligoss, eds. Literacy and Computers: The Complications of Teaching and Learning with Technology. Modern Language Association Series on Research and Scholarship in Composition. New York: Modern Language Association, 1994.
  33. Slatin, John M. "Hypertext and the Teaching of Writing." Edward Barrett, ed. Text, Context, and Hypertext: Writing with and for the Computer. Cambridge: MIT press, 1988. 111-129.
  34. Slatin, John M. "Is There a Class in this Text? Creating Knowledge in the Electronic Classroom." Edward Barrett, ed. Sociomedia: Multimedia, Hypermedia, and the Social Construction of Knowledge. Cambridge: MIT Press, 1992. 28-52.
  35. Slatin, John M. "Reading Hypertext: Order and Coherence in a New Medium." College English 52 (1990): 870-883.
  36. Slatin, John M. "Text and Hypertext: Reflections on the Role of the Computer in Teaching Modern American Poetry." David S. Miall, ed. Humanities and the Computer: New Directions. Oxford: Oxford UP, 1990. 123-157.
  37. Sproull, Lee, and Sara Kiesler. Connections: New Ways of Working in the Networked Organization. Cambridge and London: MIT, 1991.
  38. Suchman, Lucy A. Plans and Situated Actions. Learning in Doing: Social, Cognitive, and Computational Perspectives. New York: Cambridge University Press, 1987
  39. Syverson, M. A. The Wealth of Reality: An Ecology of Composition. Dissertation. University of California, San Diego, 1994.
  40. Thomas, S. O. Rethinking Assessment: Teachers and Students Helping Each Other Through the 'Sharp Curves of Life.' Learning Disability Quarterly, 1993, vol. 16 (Fall), 257-279
  41. Tuman, Myron C., ed. Literacy Online: The Promise (and Peril) of Writing with Computers. Pittsburgh and London: Pittsburgh UP, 1992.
  42. Turkle, Sherry. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster, 1984.
  43. Winograd, Terry, and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Reading, Mass.: Addison-Wesley, 1986.
  44. Woods, David D. Coping With Complexity: The Psychology of Human Behavior in Complex Systems. in Tasks, Errors, and Mental Models : A Festschrift to Celebrate the 60th Birthday of Professor Jens Rasmussen. eds. L.P. Goodstein, H.B. Andersen, S.E. Olsen. London ; New York : Taylor & Francis, 1988.
  45. Zuboff, Shoshana. In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books, 1988.

Questions? Email Peg Syverson: syverson@uts.cc.utexas.edu or John Slatin jslatin@mail.utexas.edu

© 1995-2006 M. A. Syverson