WP7.2 Competence-oriented e-assessment perspectives
Individual contribution to D7.2.3 from UniGe
University of Geneva
Laurent Moccozet
Sept. 30, 2013
Soft skills/Key competences e-assessment

Soft skills & key competences

The words skill and competence are very often used interchangeably. However, they have different definitions, and a competence is considered broader than a skill [1]: a competence includes several skills. In their effort to define a typology of knowledge, skills and competences [2], Winterton et al. give a definition of key competences: "Key competences are context-independent, applicable and effective across different institutional settings, occupations and tasks. These typically include basal competences, such as literacy, numeracy, general education; methodological competences, like problem solving, IT skills; communication skills, including writing and presentation skills; and judgment competences, such as critical thinking." [2]
Key competences are also referred to as "transversal skills" or even "soft skills". However, soft skills may have a broader or sometimes slightly different meaning. Soft skills encompass two categories of skills [3], [4]:
1) Personal qualities, 2) Interpersonal skills.
They include for example: Communication skills; Critical and structured thinking; Problem solving skills; Creativity; Teamwork capability; Negotiating skills; Self-management; Time management; Conflict management; Cultural awareness; Common knowledge; Responsibility; Etiquette and good manners; Courtesy; Self-esteem; Sociability; Integrity / Honesty; Empathy; Work ethic; Project management and Business management [3].
Soft skills are often related to employability and lifelong learning. In [4], the author indicates that employers expect to hire employees with strong soft skills. Soft skills can vary from one sector to another [3]. Soft skills are also associated with informal learning, whereas hard or technical skills are associated with formal learning [4]. Soft skills are also important for students during their studies. The key issues for students are therefore: how can they acquire or improve these soft skills or key competences? How can they assess and evaluate their acquisition of these soft skills, and how can they demonstrate that they master them? This also creates a challenge for universities and higher schools: how can they assist students with respect to soft skills?
According to the study described in [4], the top ten soft skills that business executives are expecting from their employees are:
1) Communication – oral, speaking capability, written, presenting, listening
2) Courtesy – manners, etiquette, business etiquette, gracious, says please and thank you, respectful
3) Flexibility – adaptability, willing to change, lifelong learner, accepts new things, adjusts, teachable
4) Integrity – honest, ethical, high morals, has personal values, does what's right
5) Interpersonal Skills – nice, personable, sense of humor, friendly, nurturing, empathetic, has self-control, patient, sociability, warmth, social skills
6) Positive Attitude – optimistic, enthusiastic, encouraging, happy, confident
7) Professionalism – business-like, well-dressed, appearance, poised
8) Responsibility – accountable, reliable, gets the job done, resourceful, self-disciplined, wants to do well, conscientious, common sense
9) Teamwork – cooperative, gets along with others, agreeable, supportive, helpful, collaborative
10) Work Ethic – hard working, willing to work, loyal, initiative, self-motivated, on time, good attendance
The difficulty associated with transversal competences is that they can be learned anywhere, at any time, in both formal and informal contexts, and they can be applied anywhere at any time. This considerably increases the difficulty of assessing them. When can they be assessed? How can they be assessed?
The European Reference Framework

The European Reference Framework [5] is one attempt to define a standard set of soft skills or key competences. It describes eight key competences for lifelong learning. As stated on the framework web page: "The European framework for key competences for lifelong learning, released at the end of 2006, identifies and defines the key abilities and knowledge that everyone needs in order to achieve employment, personal fulfilment, social inclusion and active citizenship in today's rapidly-changing world."
The Reference Framework sets out eight key competences:
1) Communication in the mother tongue;
2) Communication in foreign languages;
3) Mathematical competence and basic competences in science and technology;
4) Digital competence;
5) Learning to learn;
6) Social and civic competences;
7) Sense of initiative and entrepreneurship;
8) Cultural awareness and expression.
Among these eight competences, we particularly focus on two: digital competence and learning to learn. They are directly related to learning and therefore immediately useful for students (although they will also remain useful in their professional life).
The digital competence is defined as follows: "Digital competence involves the confident and critical use of Information Society Technology (IST) for work, leisure and communication. It is underpinned by basic skills in ICT: the use of computers to retrieve, assess, store, produce, present and exchange information, and to communicate and participate in collaborative networks via the Internet." [5]
The learning to learn competence is defined as follows: "Learning to learn is the ability to pursue and persist in learning, to organise one’s own learning, including through effective management of time and information, both individually and in groups. This competence includes awareness of one’s learning process and needs, identifying available opportunities, and the ability to overcome obstacles in order to learn successfully. This competence means gaining, processing and assimilating new knowledge and skills as well as seeking and making use of guidance. Learning to learn engages learners to build on prior learning and
life experiences in order to use and apply knowledge and skills in a variety of contexts: at home, at work, in education and training. Motivation and confidence are crucial to an individual’s competence."[5]
This second competence relies on the acquisition of a set of prerequisite basic skills such as literacy, numeracy and ICT skills.
The report "The Use of ICT for the Assessment of Key Competences" [6] provides an extensive and precise picture of the ICT landscape for key competences assessment. The author identifies four generations of competence e-assessment strategies based on different usages of ICT:
• Generation 1: Computerized testing
• Generation 2: Computerized adaptive testing
• Generation 3: Continuous measurement
• Generation 4: Intelligent measurement
Generation 1 simply consists of automating the administration and scoring of conventional tests. Generation 2 brings only a minor improvement by adapting the behaviour of the tests according to each student's answers.
The last two generations rely heavily on learning analytics. As learning activities are increasingly performed inside virtual learning environments, it is possible to continuously log, monitor, mine and analyse students' learning activities. The resulting data potentially allow embedding competence assessment inside learning.
We are currently at the border between Generations 1 and 2 (computer-based assessment) and Generations 3 and 4 (embedded assessment).
The report identifies different families of technologies for assessing key competences:
• Computer-based assessment
• Quizzes and simple games
• ePortfolios
• Peer assessment
• Self-assessment
• Virtual world games
• Simulations
• Intelligent tutors
Each family can be applied to certain types of key competences and for certain purposes: diagnostic, formative and summative. Figure 1 (reproduced from [7]) summarizes, for each key competence, which type of technology can be used and for which purpose. In the table, the learning to learn competence can be assessed by 1) self-assessment for diagnostic/formative/summative purposes and 2) peer assessment for formative/summative purposes, while the digital literacy competence can be assessed by simulations for diagnostic/formative/summative purposes.
As raised in [6], "the very nature of digital competence invites for technology-based assessment formats. However, many of the most currently used assessment tools for digital competence employ a knowledge-based, traditional multiple choice format." This format of assessment focuses on knowledge rather than on skills. The report describes a few experimental setups and pilots, such as iSkills1, where students are immersed in a digital environment in which they have to perform realistic tasks and activities.
Figure 1 – Overview of the potential of different ICT-based tools for the assessment of Key Competences (reproduced from [7])
Skills assessment in professional social networks

Companies have been addressing the issue of skills and competences and their assessment for a long time. Although the context is different, it seems interesting to evaluate how professional social networks handle skills and their assessment. The main difference is that on professional social networks, users are all expected to be experts, whereas students are still learners.
LinkedIn2 and ResearchGate3 are two social networks dedicated to professional activities. ResearchGate is specialized for scientists, whereas LinkedIn is general and covers all sectors. Both social networks have introduced the notion of skills and competences with simple assessment schemas. The assessment processes are very similar on both networks.
The proposed schemas confirm what is depicted in Figure 1: they are based on a mixture of self- and peer-assessment. Each user claims his/her skills by tagging his/her profile with a list of skill tags. The assessment is achieved by peers who endorse the skills. The validation of the skills is crowdsourced: the more peers endorse one of someone's skills, the more this particular skill is validated and can be trusted. It is also validated by the endorsers' own skills. The skills are displayed in the user's profile as a tag cloud. For each skill, the number of endorsements is indicated, either as a number or by showing the list of peers who have endorsed it. In ResearchGate, a peer can add a skill to the list of a user's skills and endorse it. In LinkedIn, a user can write a recommendation for a peer, but the recommendation applies globally to the peer and not to a specific skill. Figure 2 and Figure 3 show how skills assessment is managed in LinkedIn and ResearchGate respectively.

1 http://www.ets.org/iskills/
2 http://www.linkedin.com/
3 http://www.researchgate.net/
Figure 2 – Display of user’s skills and competences in LinkedIn
Figure 3 – Display of user’s skills and competences and skills endorsement in ResearchGate
Case studies
1) Digital competence @ Unige: CALIS SES http://www.unige.ch/biblio/ses/calis/
Computer-Assisted Learning Information Searching (CALIS) is an online tutorial available at the University of Geneva. Different versions exist: CALIS SES at the Faculty of Economic and Social Sciences and CALIS Sciences at the Faculty of Sciences. Each version has been adapted to the areas covered by its faculty. At the Faculty of Economic and Social Sciences, the module is integrated into the bachelor curriculum and leads to a formal online assessment. CALIS is implemented inside the Moodle platform at the University.
An open version is available at https://moodle.unige.ch/course/view.php?id=337
The tutorial is made of four modules: elaborating information retrieval; exploiting resources; citing resources; and presenting a bibliography. Each module combines theory with examples, resources and demonstrations, and exercises and self-evaluation quizzes.
The assessment is an online quiz composed of 50 questions. The quiz is implemented inside the Moodle platform at the University (as is CALIS). The quiz opens once each semester (with a second attempt for students who failed the first one). It remains open for two weeks; students have 50 minutes to complete it and may take it only once.
The results are collected by the librarians and submitted to the professors in charge of the bachelor, who decide whether students have succeeded or failed according to their performance on the quiz. The result of the assessment is a pass/fail decision. Students can freely follow the tutorial during the semester. Passing the assessment is mandatory before starting the bachelor project.
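Since the quiz results can be exported from Moodle and the outcome is a simple pass/fail decision, this final step could easily be scripted. The following Python sketch is only an illustration: the CSV column names ('email', 'grade') and the threshold are assumptions, as the actual cut-off is decided by the professors in charge.

import csv

PASS_THRESHOLD = 30  # hypothetical cut-off out of 50 questions; the real one is set by the professors

def pass_fail_decisions(csv_path):
    """Read a Moodle-style grade export (columns assumed: 'email', 'grade')
    and return a pass/fail decision per student."""
    decisions = {}
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            score = float(row["grade"])
            decisions[row["email"]] = "pass" if score >= PASS_THRESHOLD else "fail"
    return decisions

if __name__ == "__main__":
    for student, decision in pass_fail_decisions("calis_quiz_export.csv").items():
        print(student, decision)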
Quizzes and multiple-choice questionnaires (MCQs) appear to be the most popular online methods to assess the information literacy skill. This is confirmed by the survey of assessment methods available in [11]: 34% of the collected case studies use multiple-choice questionnaires and 15% use quizzes or tests. This can also be verified from the online list of information literacy assessment resources collected by J. Mueller and available at:
http://jfmueller.faculty.noctrl.edu/infolitassessments.htm
Many online assessment services are also based on quizzes/MCQs:
• TRAILS (Tool for Real-Time Assessment of Information Literacy Skills): http://www.trails-9.org/.
• ISST (Information Seeking Skills Test) is a web-based test of 53 multiple-choice items.
• ILT (Information Literacy Test) is a 60-item multiple-choice test.
• SAILS (Standardized Assessment of Information Literacy Skills) is a knowledge test with multiple-choice questions targeting a variety of information literacy skills: https://www.projectsails.org/.
As raised in [13], MCQs are usually limited to assessing lower-order thinking skills, whereas information literacy requires higher-order thinking skills. It can be argued that in many cases the choice of MCQs is driven by constraints such as ease of administration and the rapid availability of results. Another reason for this situation is probably that, in most of the case studies, information literacy does not carry credits. The assessment is therefore not treated as strictly as in other disciplines.
We have also opened a question on ResearchGate: "Methodologies and tools for competence-based e-assessment of information literacy skills? Is your University teaching information literacy? How is it assessed? Methodology? ICT tools?" Currently 9 answers have been collected. They can be reviewed at:
https://www.researchgate.net/post/Methodologies_and_tools_for_competence-based_e-assessment_of_information_literacy_skills
Although the number of answers is limited, they confirm the survey of assessment methods in [11]: it appears that peer and self-assessment based on MCQs and quizzes are quite popular and widespread.
The review of instruments provided in [13] categorizes assessment methods along two axes: objective, interpretive or hybrid methods on the first, and cognitive, performance-based or hybrid on the second. From the point of view of e-assessment, objective methods are more amenable to automation than interpretive ones. For objective methods such as MCQs, the assessment and the results can be directly produced with IT-based tools, whereas with interpretive methods such as bibliography analysis, IT tools can only support some steps of the assessment. The interpretation, and therefore the grading, remains in the hands of the teachers. The authors conclude their evaluation by advocating a multiple-methods approach able to cover all the dimensions involved in information literacy.
Significant alternative assessment methods [11] are mainly based on self-assessment, analysis of bibliographies and portfolios [12]. Portfolio-based strategies seem to bring various advantages such as close collaboration and integration between teachers and librarians. However, portfolios seem to be feasible only for rather small groups of students.
It appears that, apart from the technical aspect of assessment, a potentially successful approach consists in establishing collaborative frameworks between instructors and librarians. In [14], a librarian-instructor collaborative framework is proposed. The described case study is based on two assessment tools: an information literacy inventory and a learning outcomes checklist. Similar approaches can be supported by a collaborative social platform with portfolio-like features.
2) Learning to learn @ Unige: Ateliers "réussir ses études" http://www.unige.ch/dife/reussir/Programmereussir.html
The University of Geneva offers a program called "réussir ses études" ("succeed in your studies"). Participation in this program is voluntary. Students are offered a few services, among which they can attend a series of four workshops. These workshops are organized into different topics around the "learning to learn" competence.
Figure 4 – Home page of the “réussir ses études” workshops at the University of Geneva
There are four workshops in the "réussir ses études" program: 1) taking notes; 2) time management; 3) preparing for examinations; and 4) memorisation strategies. Each workshop is organized in two sessions of 90 minutes each. Students can register freely for any of the workshops. Each year around 350 students attend the program.
The skills assessment is processed in two steps: a first diagnostic step and then a final formative step. At both steps, students are asked to fill in a questionnaire based on the Learning and Study Strategies Inventory (LASSI) [8]: "The LASSI is a 10-scale, 80-item assessment of students' awareness about and use of learning and study strategies related to skill, will and self-regulation components of strategic learning. … The LASSI is both diagnostic and prescriptive."4. Students fill in the questionnaire twice, before the workshops and after the workshops. The workshop tutor gives them personalized feedback based on the comparison of the two questionnaires. The questionnaires are submitted as Excel spreadsheets that are exchanged by email between the tutor and the students. The questionnaire could be moved to an online system in order to obtain immediate results and to allow synthesising and analysing the results.
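Moving the pre/post comparison online essentially means computing per-scale scores for both questionnaires and reporting the differences, which the tutor currently does by hand in Excel. The Python sketch below illustrates this under explicit assumptions: the scale names follow the published LASSI scales, but the item-to-scale mapping and the plain-sum scoring are simplifications of the real instrument, which uses its own keying and percentile norms.

LASSI_SCALES = [
    "Anxiety", "Attitude", "Concentration", "Information Processing",
    "Motivation", "Selecting Main Ideas", "Self Testing",
    "Study Aids", "Test Strategies", "Time Management",
]

def scale_scores(answers, item_to_scale):
    """answers: {item_id: value on a 1-5 Likert scale};
    item_to_scale: {item_id: scale name}. Returns {scale: summed score}."""
    scores = {scale: 0 for scale in LASSI_SCALES}
    for item, value in answers.items():
        scores[item_to_scale[item]] += value
    return scores

def progress_report(pre_answers, post_answers, item_to_scale):
    """Per-scale difference between the post- and pre-workshop questionnaires,
    i.e. the comparison on which the tutor's personalized feedback is based."""
    pre = scale_scores(pre_answers, item_to_scale)
    post = scale_scores(post_answers, item_to_scale)
    return {scale: post[scale] - pre[scale] for scale in LASSI_SCALES}

Such a script could sit behind an online form, giving students immediate results and allowing the tutor to synthesise results across the whole cohort.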
Very few universities offer that kind of workshop. There is currently no specific e-assessment framework proposed to assess the learning to learn competence.
The evaluation approach is similar to the one often applied by companies. In companies, skills evaluations are often achieved through self- and peer-assessment. The self and peer feedback is collected through standardized questionnaires.
Proposed framework

In this section we propose some components of a framework that could address the assessment of transversal skills in universities.
4 http://www.hhpublishing.com/_assessments/LASSI/
Badges: digital evidence of skills
The Mozilla open badge framework

The open badge framework opens the way to facilitating the recognition of skills, competences and achievements. Any organization or community can issue badges corresponding to the validation of a skill, competence or achievement. Learners can collect badges from different places and display them according to their needs. These badges are not simply pictures that can be displayed on a web page: the mechanism is able to certify the badge with its issuer.
Figure 5 – Management of skills through the open badge framework (from https://wiki.mozilla.org/Badges)
Figure 6 – The Mozilla open badge infrastructure
Any badge issuer can award certified badges to learners. The learners can collect and manage their badges in a badge backpack. This allows displaying skills and achievements across a range of different display sites: personal resume or web site, social networking profiles, and employment sites. Mozilla offers the infrastructure to deploy the open badge framework. It is up to the issuers to define what a badge corresponds to and how it is earned.
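To make the issuing side more concrete, the following Python sketch builds the kind of hosted badge assertion an issuer could publish. The field names loosely follow the Open Badges specification, but the badge, the URLs and the identifiers are illustrative placeholders and should be checked against the current Mozilla documentation before any implementation.

import hashlib
import json
import time

def recipient(email, salt="switch-d723"):
    """Hash the learner's email so the hosted assertion does not expose it."""
    digest = hashlib.sha256((email + salt).encode("utf-8")).hexdigest()
    return {"type": "email", "hashed": True, "salt": salt,
            "identity": "sha256$" + digest}

badge_class = {                                              # hypothetical badge definition
    "name": "Information literacy (CALIS)",
    "description": "Passed the CALIS online assessment at UniGe.",
    "image": "https://example.unige.ch/badges/calis.png",    # placeholder URLs
    "criteria": "https://example.unige.ch/badges/calis.html",
    "issuer": "https://example.unige.ch/badges/issuer.json",
}

assertion = {
    "uid": "calis-2013-0001",
    "recipient": recipient("student@etu.unige.ch"),
    "badge": "https://example.unige.ch/badges/calis.json",   # where badge_class is hosted
    "verify": {"type": "hosted",
               "url": "https://example.unige.ch/assertions/calis-2013-0001.json"},
    "issuedOn": int(time.time()),
}

print(json.dumps(assertion, indent=2))

The backpack then fetches the assertion from the hosted URL, which is what allows a display site to verify that the badge really comes from the issuer.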
Meta-badges

The main idea of the open badge framework is to allow collecting certified digital evidence of competences across the various places where anyone can learn, train and acquire competences (these places can host formal or informal learning). If we consider competences such as information literacy or learning to learn inside an institution, the acquisition and validation process is transversal to the different formal learning activities. For example, regarding information literacy at the University of Geneva, a student will learn information literacy during the online tutoring module CALIS, but he/she will apply it while writing a term paper. Therefore, a competence such as information literacy is not issued once in a single place; it is the result of multiple issuances in different places. This mechanism corresponds to the idea of meta-badge proposed by Mozilla for the open badge framework: "Multiple motivational badges or certification badges may be aggregated into higher-level 'meta' badges that represent more complex literacies or competencies. It may be that these meta-badges are developed top-down, created and issued by organizations to target specific sets of skills, or bottom-up, as reflections and narratives around sets of badges important within a certain community or for a particular individual. Badges give us the flexibility to support learning innovation, recognize skills and achievements at multiple stages and granularities and create
taxonomies of achievement that help people discover learning opportunities and extend the value of that learning." Following this idea, it should be possible to define a dedicated hub server at the level of an institution that would be in charge of managing the meta-badges. Each meta-badge certifies the acquisition of a competence, such as information literacy, inside the institution. This acquisition is obtained by accumulating sub-badges that correspond to the completion of activities that are part of the global competence.
There could be a "competence dashboard" showing students the list of competences they can acquire inside their institution, how they can validate these competences, and their current status with respect to the competences they are attempting to validate. There would not be just one way to acquire and validate a competence, but many. This could define a transversal learning path between different courses, seminars and workshops, but also projects, personal or group works…
Figure 7 – Meta-badges management inside the issuing institution
Figure 7 gives an idea of the global process: a student consults his/her competence dashboard. She/he sees that she/he can validate competence C1 via two learning paths. She/he decides to take a quiz (1.1) and to submit a project (2.1). She/he validates both and gets the corresponding sub-badges (1.2 and 2.2). This information is collected by the meta-badge manager, which adds both sub-badges and validates the acquisition of the competence. The corresponding badge is added to the student's dashboard and the institution issues the badge, which the student can add to her/his badge backpack. She/he can then display it, for example, on her/his LinkedIn home page.
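The core of the meta-badge manager could be a simple aggregation rule over earned sub-badges. The sketch below is a minimal illustration, under the assumption that a competence is validated as soon as all sub-badges of at least one of its learning paths have been earned; whether one path suffices or several must be combined is a design decision for the institution, and the competence, path and badge names are made up for the example.

COMPETENCES = {
    "C1: Information literacy": [             # hypothetical competence
        {"CALIS quiz"},                        # learning path 1 (sub-badges 1.1/1.2)
        {"Term-paper bibliography project"},   # learning path 2 (sub-badges 2.1/2.2)
    ],
}

def validated_competences(earned_sub_badges):
    """Return the competences whose meta-badge can be issued, given the
    set of sub-badge names a student has collected so far."""
    issued = []
    for competence, paths in COMPETENCES.items():
        if any(path <= earned_sub_badges for path in paths):  # one path fully covered
            issued.append(competence)
    return issued

# Example: the student of Figure 7 has completed both paths.
student_badges = {"CALIS quiz", "Term-paper bibliography project"}
print(validated_competences(student_badges))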
Peer endorsement of skills

The approach adopted in professional social networks such as ResearchGate or LinkedIn sounds interesting. However, it cannot be applied to the learning context without adaptation. First, this approach requires the availability of a social learning platform where students can share and exchange their productions. Members of professional social networks are all experts in their area; they are therefore able to assess and evaluate the contributions of their peers. Students cannot be considered experts, so the quality of peer endorsement of skills is questionable. Other usual drawbacks of peer assessment also apply, such as the possibility that some students organize themselves to cheat and "cross-endorse" skills. One direction worth investigating is cross-level endorsement, where only more advanced students can endorse the skills of less advanced students. This is close to some kind of mentoring/tutoring system. This approach also requires training students to assess and evaluate skills.
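The cross-level idea can be made concrete with a simple weighting rule: an endorsement only counts when the endorser is at a strictly higher level in the skill than the endorsed student, and its weight grows with the endorser's level. The Python sketch below is purely illustrative; the level scale, the weights and the trust threshold are assumptions, not part of any existing platform.

from collections import defaultdict

def endorsement_scores(endorsements, levels, threshold=3.0):
    """endorsements: list of (endorser, student, skill) tuples.
    levels: {(person, skill): level from 0 (novice) upwards}.
    Returns the (student, skill) pairs whose accumulated weight reaches the threshold."""
    scores = defaultdict(float)
    for endorser, student, skill in endorsements:
        endorser_level = levels.get((endorser, skill), 0)
        student_level = levels.get((student, skill), 0)
        if endorser_level > student_level:        # cross-level rule: only more advanced peers count
            scores[(student, skill)] += endorser_level
    return {key: weight for key, weight in scores.items() if weight >= threshold}

Such a rule mitigates, but does not eliminate, the risk of cross-endorsement between students at the same level.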
Data mining and learning analytics

Learning analytics is usually defined as "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs"5. Data about learners are becoming easier and easier to access with the advent and generalization of virtual learning environments in higher schools and universities: LMS, CMS, e-Portfolio, MOOCs…
Some competence assessment metrics could probably be devised from learning analytics. These metrics could then be applied individually to each student in combination with an open badge framework, as proposed above. As more and more institutions deploy multiple learning environments (one or more LMSs, an e-Portfolio, a social platform…), there is a need to harvest all the logs in order to aggregate each student's activities across these different platforms. This introduces privacy issues regarding the data that would be collected about students.
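As a rough illustration of such a metric, the sketch below aggregates activity logs from several platforms into a per-student indicator that a badge rule could consume. The log fields, event names and scoring weights are purely illustrative assumptions; real logs would also have to be anonymised or access-controlled to address the privacy issues mentioned above.

from collections import defaultdict

EVENT_WEIGHTS = {          # hypothetical contribution of each event type
    "forum_post": 1.0,
    "peer_review_submitted": 2.0,
    "quiz_passed": 3.0,
    "portfolio_entry": 2.0,
}

def competence_indicator(log_streams):
    """log_streams: iterable of event lists, one per platform (LMS, e-Portfolio, ...).
    Each event is a dict with at least 'student' and 'event' keys.
    Returns {student: weighted activity score}."""
    scores = defaultdict(float)
    for stream in log_streams:
        for event in stream:
            scores[event["student"]] += EVENT_WEIGHTS.get(event["event"], 0.0)
    return dict(scores)

# Example with two toy platform logs.
lms_log = [{"student": "s1", "event": "quiz_passed"},
           {"student": "s1", "event": "forum_post"}]
portfolio_log = [{"student": "s1", "event": "portfolio_entry"}]
print(competence_indicator([lms_log, portfolio_log]))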
Many learning and training activities take place outside the institutional virtual learning environments: on the Web 2.0 or locally on students' personal computers. These activities escape the institutional environment and therefore cannot be included in a learning-analytics-based assessment process. The Web 2.0 and local resources that students manage define their Personal Learning Environment (to which the institutional resources they have access to can be added). Students can also demonstrate competences and skills during these informal learning activities. A dedicated space could be defined inside the institution where students could voluntarily submit proof and evidence of skill achievements. This space would be somewhat similar to an e-Portfolio.
5 https://tekri.athabascau.ca/analytics/
Students could then request a teacher or assistant to assess their submission, which could then lead to the validation of all or part of the corresponding skill.
References
[1] B. Hoskins and R. D. Crick, "Competences for Learning to Learn and Active Citizenship: different currencies or two sides of the same coin?", European Journal of Education, vol. 45, no. 1, pp. 121–137, 2010.
[2] J. Winterton, F. Delamare - Le Deist, and E. Stringfellow, "Typology of knowledge, skills and competences", CEDEFOP, TI-73-05-526-EN-C, 2009.
[3] B. Schulz, "The importance of soft skills: Education beyond academic knowledge", Nawa Journal of Communication, vol. 2, no. 1, pp. 146–154, 2008.
[4] M. M. Robles, "Executive Perceptions of the Top 10 Soft Skills Needed in Today's Workplace", Business Communication Quarterly, vol. 75, no. 4, pp. 453–465, Dec. 2012.
[5] "European Commission - The European framework for key competences". [Online]. Available: http://ec.europa.eu/education/lifelong-learning-policy/key_en.htm. [Accessed: 28 May 2013].
[6] C. Redecker, "The Use of ICT for the Assessment of Key Competences", JRC Scientific and Policy Reports, EUR 25891, 2013.
[7] C. Redecker and Y. Punie, "How to use ICT for the Assessment of Key Competences", presented at the Thematic Working Group on the Assessment of Key Competences Meeting, Brussels, 2011.
[8] C. E. Weinstein, D. R. Palmer, and A. C. Schulte, Eds., Learning and Study Strategies Inventory. Clearwater, FL: H&H Publishing Company, 2002.
[9] The Mozilla Foundation and The P2P University, "Open Badges for Lifelong Learning", 2012.
[10] J. Liebowitz and C. Y. Suen, "Developing knowledge management metrics for measuring intellectual capital", Journal of Intellectual Capital, vol. 1, no. 1, pp. 54–67, Mar. 2000.
[11] A. Walsh, "Information literacy assessment: Where do we start?", Journal of Librarianship and Information Science, vol. 41, no. 1, pp. 19–28, 2009.
[12] V. Sonley, D. Turner, S. Myer, and Y. Cotton, "Information literacy assessment by portfolio: a case study", Reference Services Review, vol. 35, no. 1, pp. 41–70, 2007.
[13] P. Beile, "Information Literacy Assessment: A Review of Objective and Interpretive Measures", in Society for Information Technology & Teacher Education International Conference 2008, 2008, pp. 1860–1867.
[14] M. Cooney and L. Hiris, "Integrating information literacy and its assessment into a graduate business course: A collaborative framework", Research Strategies, vol. 19, no. 3–4, pp. 213–232, 2003.