Int. Journal of Business Science and Applied Management, Volume 5, Issue 1, 2010
Broadening the focus of evaluation: An experiment
Subrata Chakraborty
Jaipuria Institute of Management
Vineet Khand, Gomti Nagar, Lucknow - 226010, India
Tel: +91 522 2394296,
Email: sc@jiml.ac.in
Shailja Agarwal
Jaipuria Institute of Management,
Vineet Khand, Gomti Nagar, Lucknow - 226010, India
Tel: +91 522 2394296
Email: shailja@jiml.ac.in
Abstract
Evaluation of student performance in any course, especially one delivered in a management
programme, poses a serious challenge; more so in a course like 'Business Communication', where oral
communication ought to form an integral part of evaluation. This paper presents the details of an
experiment conducted with a view to introducing this much needed component into the evaluation process.
The essential purpose of the exercise was to broaden the focus of evaluation while simultaneously
enlarging its scope. The need to maintain a certain amount of objectivity and transparency was taken as
critical. Group discussion was used as a tool. A process was developed with the objective of getting
every student evaluated on both written and non-written skills. A two-sided evaluation
mechanism was put in place to achieve the dual purpose of learning and evaluation. Statistical analysis
of the results suggests that the experiment was a useful one. The student feedback, too, was favourable.
Keywords: business communication, non-written skills, written skills, group discussion, evaluation
1 INTRODUCTION
Education in business management has gained considerable popularity over the last couple of
decades, and business schools have sprung up in different parts of the world to cater to this
growing demand. Despite this significant growth in the number of schools, the quality of education
they provide often comes under question. Even while there is a recognition that management is more
experiential than experimental, and more an art than a formula, classroom activities largely remain
confined to theoretical discourses.
Among the various courses taught in a management program, those dealing with the promotion of
communication skills assume particular importance, because effective communication, oral as well as
written, plays a critically important role in the discharge of one's day-to-day functions. How to build
the needed skills remains a key challenge for many business schools, not only because of issues relating
to the language of communication but also because of other aspects such as mannerisms and body language.
Schools frequently struggle to address this challenge effectively.
Sometimes a bigger challenge lies in arriving at a fair and objective evaluation of students
while the course is in progress, including at the stage of its completion. Problems arise because of
certain stated and implied needs: reliability, validity, objectivity and verifiability. A proper
solution often remains elusive.
This paper constitutes a step towards addressing these four needs. An experimental approach
was undertaken. The outcome of the experiment, developed and used recently on a batch of first-trimester
students pursuing a one-credit compulsory course on Business Communication in a two-year graduate
management program, is shared in this context. Group discussion is the tool used for the purpose. The
focus was on the assessment of Listening, Speaking, Reading and Writing skills, commonly termed
LSRW skills. Two questions are addressed: (i) how can oral as well as written skills be simultaneously
incorporated into an assessment tool? and (ii) how effective can peer assessment be? The paper reports
the details of what was done to reconfigure the assessment process, dovetailing traditional paper-and-
pencil assessment by the instructor with assessment by peers. Analysis of the results suggests that
not only can oral and written skills be assessed simultaneously; the technique used can also prove
useful in catering to the four needs (reliability, validity, objectivity and verifiability) outlined above.
2 LITERATURE SURVEY
2.1 Evaluation Challenges in Business Schools
In most educational programs, a substantial proportion of teacher and student time is devoted to
activities which involve (or lead directly to) evaluation by the teacher (Crooks, 1988). The same is true
of a program in business management. Though the idea of evaluation 'generally evokes groans'
(Feinberg, 1979) from instructors as well as students, it has powerful impacts, direct and
indirect, positive and negative, and therefore deserves very careful planning and implementation.
Evaluation also serves as a communicative device between the world of education and that of the
wider society. Since the results of any particular assessment device must be accorded 'trust' by the
stakeholders if the consequences are to be acceptable, different parts of the world continue to
grapple with assessment challenges (Broadfoot and Black, 2004). New tools of evaluation, such as the use
of reflection in evaluation (Thorpe, 2000), the in-basket writing exercise (Feinberg, 1979) and business
games (McKenney, 1962), are constantly being experimented upon and developed. Such experimentation
enriches our understanding of the many links that may exist between assessment and learning and their
various interplays. It also provides a way to assess the link between teachers' practices in formative
and in summative assessment, and to construct alternatives towards strengthening the quality and status
of teachers' summative assessments.
In a business education program, development of a student's ability to apply skills and knowledge
in a variety of contexts is a critical need (Broadfoot and Black, 2004). Assessment of student progress
in acquiring this ability therefore becomes imperative. However, business education in India, and in
many other parts of the world, seems to depend primarily, if not exclusively, upon the traditional
examination system for achieving this. One apparent reason is that the method is transparent and
verifiable. Another could be that many business schools, inadvertently or otherwise, tend to focus more
upon content knowledge and hence end up using examinations to test such content knowledge in students
(Ogunleye, 2006). Students are assessed during two years of study using an array of examinations. To be
fair to these schools, however, the tools available for making assessments are also limited. The need,
therefore, is to design a systematic evaluation mechanism which, on the one hand, is transparent and
objective and, on the other, achieves the intended purpose. As in many other courses, evaluation remains
a sensitive as well as a contentious aspect of the business communication course too; needless to say, it
elicits the same groans from students and instructors. Before proceeding further, it is worth recalling
the primary objective of a business communication course: to improve the communication skills of
students. These skills are to be improved and assessed as a whole rather than limited to some
components, predominantly the written skills alone.
2.2 Dissatisfaction with Evaluation in Business Communication
Dissatisfaction with tests currently used to assess communication ability is neither new nor
uncommon. Homer L. Cox, in his study as far back as 1970, observed: "Overall, educators agreed that they
were most dissatisfied with, and students were weakest in, ability to communicate in writing; however,
dissatisfaction with tests and weakness observed varied in other areas of communication. It is probably
safe to assume that other areas of communication ability are not being tested as frequently as ability to
write, and weakness in these other areas may not be accurately assessed. The fact that other areas are
undoubtedly less frequently measured may indicate that weakness in these areas is less easily assessed.
Most effort seems to be made in improving writing ability, but writing ability remains the greatest
weakness. Of course, we do not know how much worse the situation might be if efforts to improve this
area were not made; but, on the other hand, we do not know how effective present efforts are. Writing
may lend itself to testing; whether it should get the greatest amount of attention has not been clearly
established.”
Arguably, while the "English further education sector can be described as a hotbed of
qualifications" (see Cantor, Roberts and Pratley, 1995), it is generally only written communication
skills that are evaluated. It must be remembered that good communication skills comprise the four
major aspects of communication: LSRW. Of course, the ability to distinguish between fact and assumption
is also a vital part of communication skills, as are a number of other abilities, but a test feasible
within a limited span of time can include only the items which are basic to all others, namely LSRW.
Ironically, not even all these skills get evaluated in the traditional system of examination
followed for communication skills in Indian business schools and beyond. Generally it is an
assessment of writing skills through writing, while research has established the importance of oral
skills to the corporate world as well (Mainkar, 2008; Maes, Weldy and Icenogle, 1997; Cox, 1970).
As mentioned earlier, research (Cox, 1970) establishes that skills other than written skills are less
frequently measured, thereby indicating that weakness in these areas is less easily assessed;
hence there appears to be an acute need to develop tools that can help assess these other
areas, i.e. the non-written skills.
2.3 Peer Assessment and Group Tasks
Studies in the past have provided firm evidence that innovation in fine-tuning the evaluation process
yields substantial learning gains (e.g. Crooks, 1988; McKenney, 1962). Peer learning has been
identified as a valuable strategy for teaching and learning (Broadfoot and Black, 2004). But peer
assessment, which could be an equally important strategy, has not been sufficiently explored.
The benefits of peer learning were established long before the 1970s, when education research
began to focus on such approaches (for an overview, see Jacobs and Hannah, 2004). But little work
has been done on the benefits of peer assessment and on giving students a real role in awarding
marks to their peers. It is widely accepted that 'alternative methods of assessing student
knowledge' (Desrochers, Pusateri and Fink, 2007) are useful, since assessment, largely, is a pointer
towards the received curriculum. Research (Krashen, 1981) has focused on the importance of rich and
varied input as a prerequisite for learning to take place. In this light, the output, and the evaluation
of this output, becomes equally significant. As mentioned earlier, the method typically used for
evaluation is the written examination, which ends up assessing how well the inputs provided in class
have been received in a theoretical sense, as opposed to a task-oriented assessment. The latter, if
designed with some thought, can probably assess all four LSRW skills of a student. Where there are
time constraints and one wants to use this task-oriented method, a group task can be considered to
attain the objectives; but group work per se does not create opportunities for learning. Important
conditions in group tasks are that group members must be encouraged to (i) share; (ii) jointly analyse
and evaluate ideas; (iii) come to a joint solution of the problem; and (iv) share the ownership of a
product (Mercer, 1995; Storch, 2002). Group assessment tasks are now being designed by large-scale
assessment programs (Fall and Webb, 2000); however, whether these tasks can serve as a tool for
evaluating the LSRW skills is yet to be established.
An important objective of evaluation is to provide students with immediate and
constructive feedback. Psychologists have observed that feedback on the effectiveness of a person's
performance enhances learning and influences future performance (Feinberg, 1979). While "talk" as an
aid to learning is an accepted way to provide classroom input, it is not entirely clear whether such
"talk" is indeed useful in bringing about a range of effects in specific interactions; indeed, whether
"talk" can be used for evaluation purposes still needs to be studied. It has been observed that some
participants in group discussions naturally tend to limit the effective participation of others (Miragua,
1964). Equal participation among group members is uncommon: about 40 percent of total talk time in
discussions, in groups as small as three and as large as eight, is taken by the most active participant
(Bales, 1970). According to Koschmann, Kelson, Feltovich and Barrows (1996), meaningful group
discussions can lead to effective learning by way of students engaging in deep reflection on their
ideas. Through self-reflection and by adding others' perspectives to their own reflections, learners
learn to integrate new ideas into their existing knowledge. The processes involved in asking questions,
responding to questions and elaborating upon these responses all contribute to learning (Cohen, 1994;
Slavin, 1996). Research also supports the hypothesis that group discussions can contribute to increased
self-efficacy.
Mainkar (2008) observes that although practiced widely, the grading of student participation in
class discussions has often been criticized by researchers. He further observes that, in such
discussions, the instructor simultaneously takes on two incompatible tasks: facilitating class
discussion and evaluating student participation. Students' focus, in such situations, is on earning
points rather than on learning, and instructor-based grading schemes do not motivate all students
equally. In summary, evaluation poses both a challenge and an opportunity. It is a challenge
because the process has to be fair and objective and yet achieve the intended purpose. It is an
opportunity because evaluation can be innovatively designed to cope with these challenges and also to
impart learning. The present study constitutes a humble attempt in this direction.
2.4 Research Proposition
Evaluation of student performance in any course, especially one delivered in a management
programme, poses a serious challenge; more so in a course like 'Business Communication', where oral
communication ought to form an integral part of evaluation. It also needs to be remembered that
effective evaluation, based on all the components of a course, lends appropriate seriousness to the
course and its modules. Research establishes that classroom evaluation has powerful impacts, direct
and indirect, positive and negative, and thus deserves very careful planning and implementation
(Crooks, 1988).
Keeping these concerns in mind, the present study explores the following propositions:
1. Does the method adopted do any better than existing practice?
2. Is the method effective?
3. Is the method setting-independent?
4. How replicable is the method?
3 THE STUDY
3.1 The Problem
This paper presents the details of an experiment conducted with a view to introducing oral
communication, a much needed component, into the evaluation process. The essential purpose of the
exercise was to broaden the focus of evaluation while simultaneously enlarging its scope. The need to
maintain a certain amount of objectivity and transparency was taken as critical. Group discussion was
used as a tool. A process was developed with the objectives of getting every student evaluated on both
written and non-written skills, and of engaging every student as an active participant in the process.
A two-sided evaluation mechanism was put in place to achieve the dual purpose of learning and
evaluation. This was done not only to ensure objectivity and participation but also to give the entire
class a feel of how individuals behave when involved discussions take place. Statistical analysis of the
results suggests that the experiment was a useful one. The student feedback was favourable too.
One might ask: why seek experimental evidence of the impact of one more assessment tool when a few
standard evaluation methods are already accepted and established? One reason is to add to, and gain
acceptance within, the set of accepted evaluation tools, which were themselves experimented upon,
developed gradually and proved by their quality. Perhaps of greater importance is to develop a design
enabling business communication instructors to evaluate students on something more than written skills.
Time and again, various stakeholders have emphasized the need for business management students to
possess both verbal and non-verbal communication skills (Gray, Ottesen, Bell, Chapman and Whiten, 2007),
and while the business communication syllabus across Indian business schools is a balanced mix of both
written and non-written skills, the evaluation pattern, across the globe, makes little provision for
assessing non-written skills. Hence, though the non-written modules of the business communication course
do get taken up, there is little evaluation of them, leaving a sense of incompleteness, both for the
instructor in terms of course delivery and for the students in terms of feeling they have acquired the
said skills. The reasons behind this dichotomy could be:
1. Evaluation of non-writing skills could be too time consuming with an average batch of
sixty students.
2. A lot of subjectivity might creep in, or be suspected, leading to a loss of 'trust' in the
evaluation process, which, according to research, is crucial to the acceptance of the
evaluation result (Broadfoot and Black, 2004).
3. Evaluation of non-writing skills might not be accorded proper seriousness amongst
students.
Despite the limitations observed above, the community of business communication faculty has
very often felt the need to evaluate the non-written skills of students as well, but only after
overcoming these constraints (Badenhausen, Henderson, Kump and Stanfl, 2000).
3.2 The Objective
Keeping in focus the above constraints and the stakeholders' concerns, an experiment was designed
and implemented with the following objectives:
1. To evaluate students on written and non-written skills simultaneously.
2. To create learning opportunities for students.
3. To enable students to receive immediate instructor and peer feedback.
4. To conduct the evaluation in a manner that leaves little scope for subjectivity in the
process.
5. To present a challenge to the students so that there is no lack of seriousness amongst
them.
3.3 Demography
The study was conducted at an AICTE (All India Council for Technical Education) accredited
institution offering a two-year graduate management program. The experiment, forming part of the
end-term evaluation, was developed and used on a batch of fifty-seven students pursuing a one-credit
compulsory course on business communication as a part of the program. All the participants were
non-native speakers of English; 8 students were female and 49 were male. Female participants were
comparatively few in number because the batch itself had very few female students, which did not appear
to affect the experiment, given its objective nature. 31 participants had been schooled in
English-medium instruction, 23 in Hindi-medium instruction and 3 in vernacular-medium instruction. All
participants were in the age group of 20 to 30 years, with an average age of 23, and 10 students had
prior work experience.
3.4 Tool Development
Group discussion was taken as the tool of assessment, as research indicates that group discussion
is suitable for the assessment process (Devita, 2000; Swann, 2007). The process was designed so that a
student would be tested on both written and non-written skills simultaneously through participating in
the entire process. A two-way evaluation mechanism was designed to ensure objectivity: both peers and
faculty would conduct the evaluation by awarding marks to the students participating in the group
discussion. Thus, while each student was himself or herself being evaluated, he or she was also
evaluating a set of pre-allotted students of the batch. This was done in order to meet all the
objectives explained earlier. Another objective behind involving students in the evaluation process was
to educate them in handling responsibility with accountability, one of the key skills expected of a
manager.
The class was divided into groups of eight members each, forming seven groups and accounting for
fifty-six students. As the batch had fifty-seven students, one group was given nine members to
accommodate the extra student.
While one group participated in the group discussion, the members of the other six groups were
each required to evaluate one designated member per group on pre-set parameters. Thus, each student
would be evaluated by six students (one per group) and would also evaluate the group discussion
performance of six students (one per group). This means that at all times, students were either
evaluating a peer or being evaluated themselves by peers. Apart from this, the work constituting the
written evaluation of the peer evaluators would proceed simultaneously.
The entire procedure was videotaped in order to further assess the receptivity and involvement of
the students in the new mode.
The procedure had two parts, each carrying 10 marks, running concurrently:
1. Non-written Evaluation
2. Written Evaluation
3.5 Non-Written Evaluation
Major aspects of non-written skills were considered and an Assessment Sheet was designed, to be
used both by the students and by the faculty member (Figure 1, Appendix 1). A cumulative weightage of
50% was given to student evaluation and 50% to faculty evaluation.
Since there were seven groups of eight members each, each student had the opportunity of
participating in one group discussion and of evaluating one student from each of the other groups when
those groups held their discussions, giving each student responsibility and accountability for the
evaluation of six students. Hence, at any point in the procedure, the students were either participating
in the group discussion or evaluating one of their batch mates. Each student undertaking the group
discussion was assessed by six peers on pre-determined parameters, making a total of 120 marks. These
marks were later scaled down to 5 marks (50% of the 10 marks) and added to the 5 marks from the faculty
member (scaled down to 5 from 20), who assessed the students on the same parameters.
N = 57
No. of groups = 7
No. of members per group = 8 (except for one group, which had nine members)
Each respondent evaluated by = six respondents (one member per group, excluding his own group) + one instructor
MM = 20 (per student) + 20 (instructor)
Therefore, 120 marks (scaled down to MM = 5) + 20 marks (scaled down to MM = 5)
Thus, 5 marks (peer evaluation) + 5 marks (instructor evaluation) = 10 marks.
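To make the scaling arithmetic concrete, the following minimal sketch (in Python; the function name and
the example marks are illustrative, not taken from the study) computes a student's 10-mark non-written
score from six peer marks and one faculty mark, each out of 20:

    # Minimal sketch of the non-written scoring scheme described above.
    # Names and example values are illustrative; only the arithmetic follows the paper.

    def non_written_score(peer_marks, faculty_mark):
        """Combine six peer marks (each out of 20) and one faculty mark (out of 20)
        into the 10-mark non-written component: 5 marks from peers + 5 from faculty."""
        assert len(peer_marks) == 6 and all(0 <= m <= 20 for m in peer_marks)
        assert 0 <= faculty_mark <= 20
        peer_component = sum(peer_marks) / 120 * 5   # 6 evaluators x 20 = 120, scaled to 5
        faculty_component = faculty_mark / 20 * 5    # 20 scaled down to 5
        return peer_component + faculty_component    # out of 10

    # Example: peers award 14, 15, 13, 16, 14 and 15; the faculty member awards 15.
    print(round(non_written_score([14, 15, 13, 16, 14, 15], 15), 2))  # 7.38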
To ensure maximum objectivity amongst the students, groups were formed such that there was no
overlapping, i.e. no two students evaluated each other. The attendance sheet was used to divide the
students into groups, so there was no hand-picking of students for group formation: sets of eight
students, in order of their enrolment numbers, each formed one group (G-1, G-2 and so on). Seven
heterogeneous groups were thus formed. The allocation sheet (Appendix 5) was displayed to the
respondents at the beginning of the evaluation process; the respondents were not aware of the process
beforehand.
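The study's actual evaluator allocation is reproduced in Appendix 5. As an illustration only, the sketch
below (Python; the offset rule is this sketch's own construction, not the study's) shows one way such a
roster, with one evaluator per other group and no mutual pairs, can be generated and verified:

    # Illustrative roster construction: seven groups of eight, formed in enrolment
    # order; member i of group g is evaluated by one member of every other group h.
    # Offsets are chosen so that offset(g, h) + offset(h, g) is never 0 mod 8,
    # which rules out two students evaluating each other. The fifty-seventh
    # student would be appended to one group by hand, as in the study.

    GROUPS, SIZE = 7, 8

    def evaluator(g, i, h):
        """Index (within group h) of the member who evaluates member i of group g."""
        offset = 1 if h > g else 2   # 1 + 2 = 3, never 0 mod 8 -> no mutual pairs
        return (i + offset) % SIZE

    # roster maps each student to the six peers who evaluate him or her.
    roster = {(g, i): [(h, evaluator(g, i, h)) for h in range(GROUPS) if h != g]
              for g in range(GROUPS) for i in range(SIZE)}

    # Sanity check: nobody evaluates a student who also evaluates them.
    for student, evaluators in roster.items():
        for ev in evaluators:
            assert student not in roster[ev], "mutual evaluation detected"
    print("roster valid: no two students evaluate each other")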
3.6 Written Evaluation
While the students were assessing the group discussion performance of the students allotted to
them, they were simultaneously required to justify in writing, in about seventy-five words per
evaluation, why they thought the student deserved the marks awarded. Thus, they needed to comment
critically on the performance of six students each. While this ensured their accountability for the
marks awarded, it also constituted their own written evaluation, worth ten marks, to be graded by the
faculty member. In other words, the marks they awarded to a particular student contributed to that
student's evaluation, while their written comment on his or her performance fed into their own written
evaluation. The test here was not of classroom instruction but of language proficiency (a component of
LSRW), listening skills, receptivity to what was discussed, judgment of its relevance and, consequently,
of communication skills.
4 METHODOLOGY
Topics were allotted one week prior to the group discussion, since an evaluation component was
attached. On the day of the experiment, the detailed procedure was explained to the batch. The list
showing the group division and who would evaluate whom was displayed on an LCD screen (Appendix 5).
Assessment Sheets (Appendix 1, Figure 1) and writing sheets were circulated, and the assessment
parameters were explained thoroughly. The Assessment Sheets carried the names of all the students, with
the instruction that each student evaluate only the students assigned in the list on display. The
entire procedure, which took approximately three hours, was videotaped.
To further analyze the objectivity of the evaluation and the validity of the results, statistical
tests were conducted on the marks awarded by peer evaluators and by the instructor. To test the
receptivity of the technique among students, a questionnaire was administered to the participants after
the process was completed.
5 RESULTS/ DISCUSSION
Since this was the first time such an intensive two-way evaluation procedure had been tried,
some trepidation regarding its effectiveness was natural. The major concerns were:
1. Its receptivity and acceptance among the students.
2. Would peer assessment be as objective as intended?
3. Would a simultaneous assessment of written and non-written skills be effective?
Students preferred the group discussion assessment condition and also perceived it as a more
accurate measure of their communication skills. Some research (Myers, 2007) suggests that group
discussion is not a very effective technique for promoting learning, but the present study suggests
that, if exercised with complete clarity, it can be a useful technique for both learning and evaluation.
Cox (1970) indicated that a suitably brief test should run approximately 90 minutes. The current
process took approximately 180 minutes, but considering that it successfully met the major challenge of
evaluating students on more than written skills alone, the duration appears reasonable.
A very significant finding was that, in the non-written evaluation, when the marks awarded by the
faculty (M = 3.49, SD = .71) and by the students (M = 3.63, SD = .56) were scaled down to 5 marks each,
the marks awarded were the same in 63.15% of cases. This is corroborated by the means and standard
deviations of the peer assessment (M = 12.67, SD = 2.06) and the faculty assessment (M = 12.29,
SD = 2.51) on 20 marks each. It is important to note that this observation was only a by-product of the
technique; it served to substantiate that an objective assessment can be achieved even through peer
assessment (Appendix 2, Table 1).
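How such an agreement figure can be computed is sketched below. The paper does not state its matching
rule, so rounding both scaled marks to the nearest whole mark out of 5 before comparing is an assumption
of this sketch, as are the function name and example values:

    # Sketch: percentage of cases where scaled peer and faculty marks coincide.
    # peer_scaled and faculty_scaled hold each student's marks scaled to 5;
    # rounding to whole marks before comparing is an assumption of this sketch.

    def agreement_rate(peer_scaled, faculty_scaled):
        pairs = list(zip(peer_scaled, faculty_scaled))
        same = sum(round(p) == round(f) for p, f in pairs)
        return 100 * same / len(pairs)

    # Example with three students' scaled marks:
    print(round(agreement_rate([3.6, 4.1, 2.9], [3.5, 3.4, 3.1]), 1))  # 66.7 (2 of 3 agree)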
Appendix 3, Table 2, shows that the mean of the students' evaluation of group discussion
performance was 3.63 with a standard deviation of .56, whereas the mean of the faculty evaluation of the
same students was 3.49 with a standard deviation of .71. In general, therefore, peer assessment of group
discussion performance was slightly higher than faculty evaluation, which is acceptable, as student
benchmarks will usually be somewhat lower than faculty benchmarks. The fact that peer assessment was
slightly higher than faculty assessment does not lead us to conclude that there was a play for marks, as
has been suggested by Mainkar (2008). The reasons are, first, that the variation was not very high and,
second, that since no two students evaluated each other, no apparent benefit was to be gained by marking
somebody on the higher side.
A higher variation (SD = .71) was observed in the faculty assessment of students' performance,
indicating that the faculty evaluation of the group discussion discriminated between performances, as
reflected in the larger spread of faculty scores. The higher coefficient of variation for faculty
assessment (21% against 16%) supports this higher relative variation. The test of the difference between
the means of peer and faculty assessment (t = 1.63 < 1.84 at the .01 level) also validates the above
conclusion.
A correlation analysis further verifies this. Appendix 4, Table 3, shows that students' evaluation
and faculty evaluation are moderately correlated (r = .56) at the .01 level of significance. That is, in
99% of cases a significant positive correlation between peer and faculty evaluation would be expected,
barring the 1% chance factor. The results therefore suggest that faculty and peers follow the same
pattern to a moderate extent.
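For readers wishing to replicate the analysis, the sketch below shows the reported computations
(descriptives, coefficient of variation, paired t-test and Pearson correlation) using standard
NumPy/SciPy routines. The score arrays are random stand-ins shaped like the reported statistics, since
the study's raw marks are not published:

    # Sketch of the reported analysis on hypothetical peer and faculty marks
    # (both out of 20 for each of the 57 students).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    peer = rng.normal(12.67, 2.06, 57)      # placeholder data, not the study's
    faculty = rng.normal(12.29, 2.51, 57)

    for name, x in (("peer", peer), ("faculty", faculty)):
        cv = x.std(ddof=1) / x.mean() * 100  # coefficient of variation, percent
        print(f"{name}: mean={x.mean():.2f} sd={x.std(ddof=1):.2f} cv={cv:.0f}%")

    t, p = stats.ttest_rel(peer, faculty)    # paired t-test across the 57 students
    r, pr = stats.pearsonr(peer, faculty)    # correlation between the two raters
    print(f"paired t = {t:.3f} (p = {p:.3f}); Pearson r = {r:.2f} (p = {pr:.3f})")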
These findings lead to certain very interesting conclusions. They are perhaps a reflection of the
clarity with which the assessment parameters were conveyed to the students. The batch should also be
given credit for being genuinely objective in evaluating their peers. The findings further suggest that
students are well aware of the right skills to be used in a group discussion but that their performance
suffers due to certain other, external factors. What these external factors are needs further research.
It was also observed that, since evaluation was involved and the topics were pre-determined,
students' performance was better. Significantly, the usual errors that students make in regular group
discussions, such as grammatical errors, poor structuring of thoughts and inappropriate non-verbal
signals, were far fewer in number. It needs to be studied whether preparation of the topic helps in
reducing behavioural, para-language and body-language errors. Further research should also assess what
factors lead to the same errors when students are required to express themselves extempore.
However, grammatical and other language errors in the written part appeared to be much the same as
in a standard examination, even though a standard examination covers a pre-decided curriculum for which
practice is possible, whereas in this case the fact that written assessment would form part of the
technique was revealed to the students only when the process started, and there was no pre-determined
syllabus. This perhaps suggests that grammatical correctness comes from correctness of thought rather
than from practising for a short period of time, and it leaves major scope for further research.
Post-process questionnaire feedback on the technique revealed that an overwhelming number of
students appeared to be satisfied with the experiment. In particular, 75% of students (M = 3.98 on a
five-point Likert scale, where 1 = Strongly Disagree and 5 = Strongly Agree) felt that such techniques
should be made an integral part of the curriculum, as they put to the test the real objective of a
communication class: confident expression. Performance was reported to be better, and stress levels far
lower, than in a standard examination, as indicated by a mean of 3.57, perhaps because the technique
appeared less formidable. That the process also gave students an opportunity to learn and to practise
better structuring and expression of thoughts was substantiated by mean scores of 4.07 and 4.12
respectively.
6 LIMITATIONS
The primary constraint in implementing the test effectively across business schools would be
batch size. If the same exercise were carried out in more than one section, the lack of a carefully
planned strategy, in the sense of a clever division of groups and students so that student evaluators do
not overlap, could undermine the impact of the tool. It is very important for the instructor to explain
the parameters clearly to the students; otherwise, peer assessment could be affected. It is also felt
that the test would be even more effective with a batch size of around thirty students, though this
would also mean fewer student evaluators. Whether this reduced number of peer evaluators would lead to a
play for marks has yet to be determined. Further experimentation and research are in progress, and the
outcomes, when tested, will be reported.
7 IMPLICATIONS
One objective of this experiment was that, apart from evaluation, the exercise should also enhance
student learning. A post-discussion debrief revealed that this objective was largely achieved by way of
students sharing, discussing and listening to various viewpoints on diverse topics. The authors
therefore believe that the experiment, if replicated, should provide reliable results, as it appears to
be a win-win situation for both the evaluator and the participants. If replicated successfully, it would
help instructors achieve, to a large extent, the multi-fold objectives of a class on communication:
improvement of written and non-written skills, evaluation of written and non-written skills, training
students in group discussion and, above all, training them in confident expression.
8 CONCLUSION
The experiment, still at a nascent stage, appears to have the potential to be further modified
and developed into a useful tool of assessment. The correlation between faculty and student scores,
together with the post-process feedback on the approach, validates not only this potential but also the
proposition that peer assessment, if implemented properly, can be a useful tool for student evaluation.
REFERENCES
Badenhausen, Kurt; Henderson, Eileen; Kump, Lesley and Stanfl, Robert. (2000). Forbes, Volume 165, Issue 3, pp. 100-104.
Bales, R. F. (1970). Personality and Interpersonal Behavior. New York: Holt, Rinehart and Winston.
Broadfoot, Patricia and Black, Paul. (March 2004). Redefining assessment? The first ten years of assessment in education. Assessment in Education: Principles, Policy & Practice, pp. 7-26.
Cameron, D. (2000). Good to Talk?: Living and Working in a Communication Culture. London: Sage.
Cantor, L.; Roberts, L. and Pratley, B. (1995). A guide to further education in England and Wales. London: Cassell Education.
Cohen, E. G. (1994). Restructuring the classroom: Conditions for productive small groups. Review of Educational Research, Volume 64, No. 1, pp. 1-35.
Cox, Homer L. (1970). Communication testing practices: A survey of selected universities. Journal of Business Communication, Volume 8, No. 1, pp. 13-23.
Crooks, Terence J. (1988). Impact of classroom evaluation practices on students. Review of Educational Research, Volume 58, No. 4, pp. 438-481.
Desrochers, Marcie N.; Pusateri Jr, Michael J. and Fink, Herbert C. (October 2007). Game assessment: Fun as well as effective. Assessment & Evaluation in Higher Education, Volume 32, Issue 5, pp. 527-539.
Devita, Glauco. (2000). Inclusive approaches to effective communication and active participation in the multicultural classroom: An international business management context. Active Learning in Higher Education, Volume 1, No. 2, pp. 168-180.
Fall, Randy and Webb, Noreen M. (2000). Group discussion and large-scale language arts assessment: Effects on students' comprehension. American Educational Research Journal, Volume 37, No. 4, pp. 911-941.
Feinberg, Susan. (1979). Evaluation of business communication techniques. Journal of Business Communication, Volume 16, No. 3, pp. 15-30.
Gray, Brendan J.; Ottesen, Geir Grundvag; Bell, Jim; Chapman, Cassandra and Whiten, Jemma. (2007). Marketing Intelligence & Planning, Volume 25, No. 3, pp. 271-295.
Haigh, Martin. (August 2007). Sustaining learning through assessment: An evaluation of the value of a weekly class quiz. Assessment & Evaluation in Higher Education, Volume 32, Issue 4, pp. 457-474.
Hancock, Dawson R. (2007). Effects of performance assessment on the achievement and motivation of graduate students. Active Learning in Higher Education, Volume 8, No. 3, pp. 219-231.
Holmes, J. (1992). Women's voices in public contexts. Discourse and Society, Volume 3, No. 2, pp. 131-150.
Jacobs, G. and Hannah, D. (2004). Combining cooperative learning with reading aloud by teachers. International Journal of English Studies, Volume 4, pp. 7-118.
Koschmann, T.; Kelson, A. C.; Feltovich, P. J. and Barrows, H. S. (1996). Computer-supported problem-based learning: A principled approach to the use of computers in collaborative learning. In T. Koschmann (Ed.), Computer-supported collaborative learning: Theory and practice of an emerging paradigm (pp. 83-124). Mahwah, New Jersey: Lawrence Erlbaum.
Krashen, S. (1981). Second Language Acquisition and Second Language Learning. Oxford: Pergamon.
Lee, Yekyung and Ertmer, Peggy A. (2006). Examining the effect of small group discussions and question prompts on vicarious learning outcomes. Journal of Research on Technology in Education, Volume 39, No. 1, pp. 66-80.
Maes, Jeanne D.; Weldy, Teresa G. and Icenogle, Marjorie L. (1997). A managerial perspective: Oral communication competency is most important for business students in the workplace. The Journal of Business Communication, Volume 34.
Mainkar, Avinash V. (February 2008). A student-empowered system for measuring and weighing participation in class discussion. Journal of Management Education, Volume 32, Issue 1, pp. 23-37.
Malmqvist, Anita. (2005). How does group discussion in reconstruction tasks affect written language output? Language Awareness, Volume 14, No. 2&3.
McKenney, James L. (July 1962). An evaluation of a business game in an MBA curriculum. The Journal of Business, Volume 35, No. 3, pp. 278-286.
Mercer, N. (1995). The Guided Construction of Knowledge: Talk amongst Teachers and Learners. Clevedon: Multilingual Matters.
Miragua, Joseph F. (1964). Communication network research and group discussion. Today's Speech, Volume 12, No. 4.
Myers, Greg. (2007). Enabling talk: How the facilitator shapes a focus group. Text & Talk, Volume 27, Issue 1, pp. 79-105.
Ogunleye, James. (February 2006). A review and analysis of assessment objectives of academic and vocational qualifications in English further education, with particular reference to creativity. Journal of Education & Work, Volume 19, No. 1, pp. 91-204.
Reese, Curt and Wells, Terri. (December 2007). Teaching academic discussion skills with a card game. Simulation & Gaming, Volume 38, No. 4, pp. 546-555.
Slavin, R. E. (1996). Research on cooperative learning and achievement: What we know, what we need to know. Contemporary Educational Psychology, Volume 21, pp. 43-69.
Storch, N. (2002). Patterns of interaction in ESL pair work. Language Learning, Volume 52, No. 1, pp. 119-55.
Swain, M. (1985). Communicative competence: Some roles of comprehensible input and comprehensible output in its development. In S. M. Gass and C. G. Madden (Eds.), Input in Second Language Acquisition (pp. 235-53). Rowley, MA: Newbury House.
Swann, Joan. (2007). Designing 'educationally effective' discussion. Language & Education, Volume 21, No. 4.
Thorpe, Mary. (2000). Encouraging students to reflect as part of the assignment process: Student responses and tutor feedback. Active Learning in Higher Education, Volume 1, No. 1, pp. 79-92.
APPENDICES
Appendix 1
Figure 1: Assessment Sheet (column headings)

Name of Student | Participation (3 marks) | Listening (3 marks) | Speaking (3 marks) | Body Language / Voice Modulation (3 marks) | Content Organization, Flow (3 marks) | Emotional Projection, Sincerity, Respect, Confidence, Timing (3 marks) | Total (20 marks)
Name of Peer Assessor: _______________________.
Date: ________________.
Appendix 2
Table 1: Descriptives

                          Peer Assessment   Faculty Assessment
N (Valid)                 57                57
N (Missing)               0                 0
Mean                      3.63              3.49
Std. Deviation            .56               .71
Coefficient of variation  16%               21%
Appendix 3
Table 2: Significance of difference between means of Peer and Faculty Assessment

Pair 1: Peer Assessment - Faculty Assessment (paired differences)
  Mean                                        .37719
  Std. Deviation                              1.74561
  Std. Error Mean                             .23121
  95% Confidence Interval of the Difference   -.08598 to .84037
  t                                           1.631
  df                                          56
Appendix 4
Table 3: Correlations

                                           Peer Assessment   Faculty Assessment
Peer Assessment      Pearson Correlation   1                 .56 (**)
                     Sig. (2-tailed)                         .000
                     N                     57                57
Faculty Assessment   Pearson Correlation   .56 (**)          1
                     Sig. (2-tailed)       .000
                     N                     57                57

** Correlation is significant at the 0.01 level (2-tailed).
Appendix 5
The seven groups (first column from the left) were G-1, G-2, G-3, G-4, G-5, G-6 and G-7. While
one group participated in the group discussion, the members of the other six groups were required to
evaluate one member per group on pre-set parameters (as shown in the peer evaluation columns below).
For example, member 1 from G-1 would be evaluated by member 9 from G-2, member 17 from G-3, member 25
from G-4, member 33 from G-5, member 41 from G-6 and member 49 from G-7.
PEER EVALUATION

Group   Member   Peer evaluators (one per other group)
G-1     1        9, 17, 25, 33, 41, 49
        2        10, 18, 26, 34, 42, 50
        3        11, 19, 27, 35, 43, 51
        4        12, 20, 28, 36, 44, 52
        5        13, 21, 29, 37, 45, 53
        6        14, 22, 30, 38, 46, 54
        7        15, 23, 31, 39, 47, 55
        8        16, 24, 32, 40, 48, 56
G-2     9        8, 24, 25, 33, 41, 49
        10       7, 23, 26, 34, 42, 50
        11       6, 22, 27, 35, 43, 51
        12       5, 21, 28, 36, 44, 52
        13       4, 20, 29, 37, 45, 53
        14       3, 19, 30, 38, 46, 54
        15       2, 18, 31, 39, 47, 55
        16       1, 17, 32, 40, 48, 56
        57       8, 17, 31, 36, 30, 23
G-3     17       5, 9, 32, 33, 41, 49
        18       4, 10, 31, 34, 42, 50
        19       3, 11, 30, 35, 43, 51
        20       1, 12, 29, 36, 44, 52
        21       2, 13, 28, 37, 45, 53
        22       6, 14, 27, 38, 46, 54
        23       7, 15, 26, 39, 47, 55
        24       8, 16, 25, 40, 48, 56
G-4     25       8, 14, 17, 33, 41, 49
        26       6, 15, 18, 34, 42, 50
        27       7, 16, 19, 35, 43, 51
        28       5, 11, 20, 36, 44, 52
        29       3, 12, 21, 37, 45, 53
        30       4, 13, 22, 38, 46, 54
        31       2, 9, 23, 39, 47, 55
        32       1, 10, 24, 40, 48, 56
G-5     33       4, 16, 20, 28, 48, 50
        34       3, 9, 22, 29, 47, 52
        35       5, 15, 24, 32, 46, 51
        36       1, 10, 18, 31, 45, 56
        37       2, 14, 21, 27, 44, 54
        38       6, 11, 19, 25, 43, 55
        39       7, 13, 17, 26, 42, 53
        40       3, 12, 23, 30, 41, 49
G-6     41       2, 10, 21, 26, 33, 49
        42       5, 12, 24, 28, 34, 50
        43       7, 14, 19, 30, 35, 51
        44       6, 16, 17, 32, 36, 52
        45       4, 9, 18, 25, 37, 53
        46       3, 11, 20, 27, 38, 54
        47       8, 13, 23, 29, 39, 55
        48       1, 15, 22, 31, 40, 56
G-7     49       7, 13, 20, 29, 33, 42
        50       1, 16, 24, 28, 34, 44
        51       8, 14, 19, 32, 35, 46
        52       2, 12, 21, 31, 36, 48
        53       6, 9, 17, 25, 37, 41
        54       3, 11, 23, 26, 38, 43
        55       5, 10, 18, 30, 39, 45
        56       4, 15, 22, 27, 40, 47