Using evaluation criteria and rubrics as learning tools in subtitling for the D/deaf and the hard of hearing

Abstract

This paper focuses on the use of evaluation criteria and rubrics as tools for training in an audiovisual translation mode: subtitling for the D/deaf and the hard of hearing (SDH). Following...


Introduction
The study of translation training inevitably involves the study of assessment, as it is an essential element in any teaching and learning process. To date, most of the proposals regarding assessment in Translation Studies (TS) focus on summative and product assessment (Waddington 2000 and Galán-Mañas and Hurtado Albir 2015 provide excellent overviews). This focus on summative assessment has left diagnostic and formative assessment under-researched, limiting our understanding of how these forms of assessment inform the correction of translations. The analysis of translation errors has been a long-established practice in TS, with many assessment proposals limited either to translation error categories (e.g. Delisle 1993) or to correction categories and weighting criteria (e.g. Hurtado Albir 1999; Adab 2000).
In the last few decades there has been a shift in how assessment is perceived in translation training to focus on the processes involved and not just on the end product.
Current pedagogical approaches advocate competence-based training, in which assessment is an essential element of curricular design that is directly linked to learning objectives, competences and tasks.
Under this approach, assessment becomes an essential tool for trainers, who can evaluate how students progressively develop competences through a variety of tasks 1 and assessment instruments, including diagnostic questionnaires, surveys of translation knowledge, reflective diaries, translations with commentaries, portfolios, rubrics, and recordings of professionals or students translating (Galán-Mañas and Hurtado Albir 2015, 70-72; Huertas Barros and Vine 2015, 24).
Assessment is also a useful tool for students, who can play an active role in their own learning, perform assessment themselves (self-and peer-assessment) and be more aware of their learning process, their strengths and weaknesses. Although both academia and industry are investing much effort in designing models to assess and assure the quality of the translation process and product, assessment is a broad and complex area in which much more remains to be done.
On the one hand, more assessment proposals specifically designed for training settings are needed. Teaching-oriented assessment proposals 'are few and far between' (Galán-Mañas and Hurtado Albir 2015, 67), which, given the complexity of measuring translation competence (Beeby 2000; Hurtado Albir 2017), might be expected.
On the other hand, more empirical studies of assessment instruments and evaluation criteria are needed. There is still much debate about how subjective the application of evaluation criteria to assess translations can be, and 'finding valid assessment criteria and making more objective judgements about what a good or bad translation is remains a complex matter' (Huertas Barros and Vine 2015, 23).
Within the translation industry, assessment mainly concentrates on the end product and revolves around the concept of 'quality assurance' (QA), which consists of 'the correction and/or amendment of a professional translation that is carried out by someone other than the translator before the translation is delivered to the customer' (Rasmussen and Schjoldager 2011, 92). National and international standards (e.g. ISO 17100:2015) and best practice guides outline the processes for producing quality-assured translations. Metrics and tools to assess the quality of translations can also be found in the translation sector (Sánchez-Gijón 2014).
However, conceptions of quality often differ between academia and industry, leading to debates about how to 'bridge the gap' between the two. More specific studies on quality criteria and assessment processes in both sectors are needed to better understand how these criteria and processes are applied in each setting. This paper investigates assessment practices in subtitling for the D/deaf and the hard of hearing (SDH), a form of subtitling that includes dialogue and text on screen and that integrates those elements that the D/deaf or hearing-impaired audience cannot easily perceive (see Section 2). Our research on assessment practices in both industry and academia is a first step towards designing a set of instruments that could be implemented in both sectors.
To this end, four objectives are set out: (1) To investigate the quality assurance processes and instruments in the SDH industry in Spain.
(2) To investigate the assessment processes and instruments for SDH training in Spanish universities.
(3) To design a set of evaluation criteria and a rubric which could be used in academia and industry.
(4) To elaborate interrelated and graduated tasks in which these instruments may be actively used by students.
Following this introduction to the current conception of, and research carried out on, assessment in translation, section 2 focuses on SDH and on the assessment processes and instruments used in SDH university training and in the professional sector. In section 3, a case study carried out to gain a better understanding of assessment practices in academia and industry in Spain is presented. In section 4, two assessment instruments are proposed on the basis of the data gathered, the Spanish SDH standards and a competence-based training approach. These assessment instruments include a set of evaluation criteria (i.e. a list of correction criteria and the description of the potential errors that can be categorised under each criterion) and a rubric (i.e. a scoring guide with elements to be appraised and performance levels). Section 5 suggests a number of interrelated and graduated tasks for students to actively engage with these instruments in their own learning process. The tasks outlined in this section are aimed at helping students fully understand the assessment criteria and the marks given, and at enabling them to apply these to their own work or that of their peers. Finally, conclusions are drawn, and further research avenues are suggested.

Subtitling for the D/deaf and the hard of hearing
Subtitling for the D/deaf and the hard of hearing is a form of subtitling that integrates those elements that the D/deaf or hearing-impaired audience cannot easily perceive, such as contextual information, sound effects, and music with the dialogue and text on screen (Díaz Cintas 2006, 6).
Over the last two decades the increasing awareness of issues of equality for all citizens, and the efforts of governments to ensure equal access to information, along with the greater flexibility of audiovisual media and formats, have boosted the visibility and the professional and academic development of SDH.
Since its appearance in 1973 on American television, SDH has continued to evolve and grow, and many countries now offer SDH services not only on television but also on DVD, in cinemas, theatres or opera houses. According to their mode of production, subtitles can be categorised as:
- Live, if they 'are created (sometimes using speech recognition technology) and cued, with a delay, during transmission' (Gottlieb 2015, 19).
- Pre-recorded, when they are created and/or cued by the subtitler before broadcast.
Although this communication support service has traditionally been intralingual (the first subtitles for the deaf and the hard of hearing on TV were intralingual, Zárate 2014, 41) and much SDH continues to be so, it is also possible to find interlingual SDH, as pointed out by Neves (2008, 185).

In recent years, the number of subtitled programmes has steadily risen. For example, since 2008 the BBC has subtitled 100% of the programmes on its main channels. 3 Even in countries with no specific quota, such as Germany, an increase in the quantity of subtitles is evident: the German channel ARD, for example, has steadily expanded its subtitling provision.

The academic world has responded to this exponential growth of SDH provision, and in recent decades training in SDH has rapidly been introduced in universities. The urgent need to train future professionals in the field of SDH has been felt in many countries, and universities around the world, in countries such as Spain, the United Kingdom, France, Belgium, Portugal or Italy, have begun to offer SDH training at both undergraduate and postgraduate level on their translation and interpreting courses. The focus of SDH academic research has been on a range of topics, including the history and evolution of SDH in different countries, the description of SDH practices and conventions, and the study of viewers' preferences (Díaz Cintas, Orero, and Remael 2007; Jiménez Hurtado 2007; Matamala and Orero 2010). In addition, the growing interest in the study of SDH has led to a number of PhD dissertations on SDH, ranging, for instance, from studies of SDH parameters (Neves 2005; Arnáiz 2012) to reception studies with children (Zárate 2014; Tamayo Masero 2015).
The European Union has funded research on media accessibility, encouraging academic partners, broadcasters, research institutes and SMEs to work together. SDH has been explored in research projects such as DTV4ALL (Digital Television for All).

Assessment in SDH
In order to ensure not only quantity but also quality in SDH for pre-recorded, semi-live and live TV programmes, sets of guidelines and quality standards have been developed in different countries by a range of institutions, including D/deaf associations, broadcasters, regulators, companies and universities. In Spain, the basic standards for the provision of SDH are set out in the standard UNE 153010:2012 Subtitulado para personas sordas y personas con discapacidad auditiva (AENOR 2012); in the UK, access services, including subtitling, are regulated by the Code on Television Access Services (Ofcom 2015). There is no doubt that quality assurance in SDH, as with QA in subtitling for hearing audiences, is necessary to ensure the good reception of subtitles by the audience. Despite the time pressure that is common in the television industry, QA is also important for subtitling companies, not only because of the additional costs incurred if the subtitled product is returned, but also because of a potential loss of professional credibility (James 2001, 155).
However, despite the exponential growth of the profession, the drafting of SDH standards and the increase of training courses and research, there is not much information to date on how QA takes place in the SDH industry or how SDH is assessed in graduate and postgraduate courses.
Drawing on the results of the previously mentioned project DTV4ALL, the quality of SDH in Europe has been assessed as a combination of three factors: what viewers think about SDH, how well they understand the subtitles and how they view them. Research on QA in SDH has mainly focused on live (and semi-live) SDH, which is produced by different methods such as stenotype or re-speaking. 6 Initially, the WER (word error rate) was used to measure the accuracy of live subtitles.
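By way of illustration only (this sketch is ours, not part of the study or of any of the tools surveyed), the WER compares the live subtitles produced against a verbatim reference transcript and divides the number of word-level substitutions, deletions and insertions by the length of the reference; the example sentences are invented:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance computed over words rather than characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the"): 2 errors over 6 words.
print(wer("the cat sat on the mat", "the cat sit on mat"))
```

Later proposals for live SDH refine this idea by weighting errors by type and severity rather than counting all deviations equally.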

Assessment in SDH in Spain: a case study
As previously stated, despite the growth of the profession, the increase in training and research in the field of SDH and the drafting of quality standards, there is still little information on how assessment practices are carried out. Therefore, to be able to design a meaningful assessment proposal, we undertook a research project in Spanish professional and academic settings to gain a better understanding of how assessment is being carried out.
Two different online questionnaires were designed and distributed among SDH service providers and SDH university trainers in Spain in October 2016. Both questionnaires 8 were anonymous and included 12-15 open-ended and closed-ended questions. The questionnaires incorporated some general questions about the company or the training course, and also some specific questions about assessment procedures and instruments within both these settings.
Respondents were recruited by email. We contacted universities and companies working in the field of SDH in Spain, and we gathered answers from five SDH service providers and 12 universities 9 offering specific training in SDH. Although this may seem a small sample, the total population of possible respondents was not significantly larger: the 12 universities represent those offering SDH training, 10 and the five providers are among the nine most representative providers of SDH in Spain, i.e. those with the highest annual work volumes.
In the following sections, we present and interpret the quantitative and qualitative data gathered in the two questionnaires.

Quality assurance in Spanish SDH providers
Five SDH service providers based in Spain answered the questionnaire. To contextualise their relevance in the Spanish SDH industry, they were asked about the percentage of SDH in their annual work volume; three of them indicated that SDH commissions exceeded 75% of their work volume. One service provider stated that SDH was less than 25% of their work volume and another one estimated it to be between 25 and 49%.
Two service providers have no in-house subtitlers and always work with freelancers (one to five in one case and 11 to 20 in the other). The other three providers have between one and five in-house subtitlers and also work with freelancers (two have between 11 and 20 collaborators and the third uses more than 20 freelance subtitlers). As shown in table 1, most service providers (A, B, C and D) offer pre-recorded SDH for TV, but only one (B) also produces live and semi-live subtitles. All five service providers state that quality assurance is part of the process of SDH. The people in charge of carrying out QA vary: two service providers indicate that QA is performed in-house (either by one in-house subtitler or by the project managers), while four usually request QA from freelancers.
In relation to the percentage of texts normally reviewed in the QA process, three SDH service providers state that they always review 100% of the intralingual translation, while the other two providers stated that they do not always review the entire translation. When asked about the amount of text that they normally review, one of these two providers did not know or preferred not to answer, while the other responded that it reviewed 50-75% of the translation. 11 Only three responses were gathered for the open question on how the QA process is carried out in the agency, including phases, timing, materials, etc. The fact that two providers did not respond might either suggest that the question was too broad and, therefore, difficult to answer, or that they did not know or preferred not to answer.
In line with the conclusions drawn by Rasmussen and Schjoldager (2011), the specific guidelines or protocols used by translation service providers are difficult to access, even for research purposes. Likewise, our respondents only described the process in general terms. Two SDH service providers focus on the QA of subtitles made for pre-recorded audiovisual products before distribution. One of these companies reviews only certain elements, such as spelling or reading speed. The other explains that a third person reviews the subtitler's work in two steps: first, he/she checks some parameters using an automatic checker and, second, he/she exhaustively reviews all the subtitles together with the audiovisual material. 12 A third service provider described the QA process for live and semi-live subtitles, which takes place after broadcasting; this answer was discarded, as our proposal focuses on pre-recorded subtitles.
The following two questions in the survey referred to the use of specific programmes or instruments to systematise QA and to the elements these included. None of the five SDH service providers uses specific tools (apart from subtitling software) to systematise the QA process. All five providers confirmed that subtitlers usually receive feedback after the review process.
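As an illustration of what an automatic check of technical parameters might look like, the sketch below flags subtitles that exceed a reading-speed ceiling. It is our own simplification, not a reconstruction of any respondent's tool: the 15-characters-per-second limit follows the recommendation in UNE 153010:2012, and the `Subtitle` structure is an invented stand-in for a real subtitle file format.

```python
from dataclasses import dataclass

MAX_CPS = 15  # characters per second, as recommended by UNE 153010:2012

@dataclass
class Subtitle:
    start: float  # in-cue, in seconds
    end: float    # out-cue, in seconds
    text: str

def check_reading_speed(subs):
    """Return (subtitle, cps) pairs for subtitles exceeding MAX_CPS."""
    flagged = []
    for s in subs:
        duration = s.end - s.start
        # Line breaks are not counted as readable characters.
        cps = len(s.text.replace("\n", "")) / duration
        if cps > MAX_CPS:
            flagged.append((s, round(cps, 1)))
    return flagged

subs = [
    Subtitle(0.0, 2.0, "Hello there!"),                        # 12 chars / 2 s = 6 cps
    Subtitle(2.5, 3.5, "This line is far too long to read"),   # 33 chars / 1 s = 33 cps
]
for sub, cps in check_reading_speed(subs):
    print(f"{sub.start:.1f}-{sub.end:.1f}: {cps} cps")
```

A real checker would also cover parameters such as characters per line, minimum display time and gaps between subtitles.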

Assessment in Spanish university SDH courses
A total of 17 questionnaires were gathered from 12 Spanish universities. To contextualise the courses on SDH, trainers were asked about what level their courses are offered at, the forms of instructional interaction and the content of courses followed by more specific questions about SDH tasks and assessment.
According to respondents, 47.1% (8)

When asked about the person assessing the different tasks, although five indicate that only the trainer assesses students, the majority involve other assessors in the evaluation process:
- Teacher-, peer-, self- and expert-assessment: 1 (5.9%)
- Teacher- and peer-assessment: 2 (11.8%)
- Teacher-, peer- and self-assessment: 4 (23.6%)
- Teacher- and self-assessment: 5 (29.5%)
- Teacher-assessment only: 5 (29.5%)

Trainers were also asked to describe their assessment tool(s). Eleven (68.8%) use a set of evaluation criteria, two (12.5%) use a rubric and three (18.8%) do not make use of any instrument for evaluation. 13 Even those following the Spanish standards used differing criteria and ratings. The assessment tools have mainly been designed by the teachers themselves but, on some occasions, they are designed in collaboration with other teachers or are based on the criteria of a research group or the degree board.
Of the trainers who use assessment tools, 92.3% explain them to the students before using them in assessment. However, only 58.3% (7) set tasks in which students work with the assessment tools as part of their training (formative assessment). Those tasks may consist of discussing standards, assessing SDH presented in class by other students, analysing samples of professional SDH, or self-assessing their own work.
The participants' responses reveal that both university trainers and service providers attach considerable importance to assessment and quality assurance in SDH.
All service providers confirmed that QA is part of the process of SDH and is carried out by someone other than the subtitler, and that the subtitler always receives feedback.
However, none of these companies use specific instruments to measure quality in SDH, there are no agreed standards or protocols on how to measure quality and they tend to focus on the concept of translation error.
SDH trainers seem to be using innovative student-centred assessment practices involving a variety of instruments, assessors and assessment types. Most trainers stated that they use specific instruments when assessing translations as products; sets of evaluation criteria are the most frequently used assessment tools, compared to rubrics, which are used by only two trainers. Although all the respondents stated that their instruments complied with the Spanish SDH standards, the analysis of the few instruments that were shared with us revealed that there is no consensus on the scales and categories of translation errors they include.
Our study demonstrates that there is a shortage of instruments to assess SDH.
Moreover, there is no consensus on the procedures used to assess quality, either within academia or among service providers. SDH would benefit from shared assessment instruments and procedures that could build greater consensus. The proposal presented in the following sections aims to be a contribution in this direction.

A proposal of assessment instruments for SDH training
Despite the increase in training and research in SDH in recent years, there are virtually no studies focusing on the curricular aspects of training in this area, with a few exceptions such as Díaz Cintas (2006). Translator trainers agree that students should be trained to produce market-ready audiovisual translations and that training should focus on evaluating the quality of the end product (i.e. the subtitles). Nevertheless, it is paradoxical that, while most research efforts have been devoted to the quality of subtitles, there is little research on how to train future SDH professionals, especially with regard to teaching methodologies and QA.
In the following sections, we propose two assessment tools (i.e. a set of evaluation criteria and a rubric). These two instruments and the tasks presented in section 5 are directly related to the general and specific competences taught on the two SDH courses at undergraduate level in Spain (see competences and tasks in section 5).
These instruments aim to help trainers assess both the students' learning processes and the products of this process throughout the course. Students can use them as a guide in translation tasks or as a 'critical framework' (Adab 2000, 220) to explain their translation strategies and decisions.
During the academic year 2015/2016, students from Universitat Jaume I and Universitat de València (Spain) were asked their opinion on using a set of evaluation criteria in the classroom. 14 Most students completely agreed that the evaluation criteria were useful in helping them become more aware of their difficulties and as a guide to their subtitling practice. In addition, students felt that assessment was more objective when a set of criteria was used. 15

Evaluation criteria
The set of evaluation criteria we present in this section (see table 3) should be used bearing in mind the following considerations:
- This tool does not assign negative points to each type of error. Instead, we use it to categorise errors; subtitles are then scored using the rubric.
- In order to use the evaluation criteria as the only tool for reviewing translations, it would be necessary to assign negative points to each type of error. The number of points upon which the final score is calculated should vary according to the duration of the video, the number of subtitles and the importance of the errors. Repetitive errors would stop diminishing the score once they reach a maximum number of points, to be decided by the trainer.
- Assessors may reward good subtitling solutions positively.
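The scoring logic described in these considerations could be sketched as follows. The penalty values, repetition cap and base score below are hypothetical placeholders that each trainer would set; they are not figures taken from our criteria:

```python
# Hypothetical negative points per error category; actual values and categories
# would be decided by the trainer on the basis of the evaluation criteria.
PENALTIES = {"spelling": 1, "segmentation": 2, "reading_speed": 2, "sound_effects": 3}
REPEAT_CAP = 6    # a repeated error type stops subtracting points beyond this cap
BASE_SCORE = 100

def score(errors):
    """Compute a score from a list of error category names, one entry per error."""
    deducted = {}
    for category in errors:
        penalty = PENALTIES[category]
        # Repetitive errors no longer diminish the score once the cap is reached.
        deducted[category] = min(deducted.get(category, 0) + penalty, REPEAT_CAP)
    return BASE_SCORE - sum(deducted.values())

# Ten spelling errors are capped at 6 points; one sound-effect error costs 3.
print(score(["spelling"] * 10 + ["sound_effects"]))  # 100 - 6 - 3 = 91
```

Positive points for good solutions could be added symmetrically, as a bonus dictionary mirroring the penalties.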

Evaluation rubric
Assessment rubrics are scoring guides in which elements to be assessed and performance levels are described with a view to evaluating students' performance.
Rubrics are useful tools for any assessment task and, when required, they may also include numerical ratings (Hurtado Albir 2015, 72). Constructing a rubric can be time-consuming, since articulating criteria and grading descriptors is not an easy task. However, this difficulty is compensated for by the fact that rubrics provide both learners and teachers with specific feedback about the learning process (Mertler 2001).
Therefore, placing the rubric at the students' disposal is essential.
The rubric presented in table 4 is based on the set of evaluation criteria given above and incorporates aspects of Angelelli's (2009, 40-41), Hurtado Albir's (2015, 21) and Pavani's (2016, 423-424) work in this area. In line with the idea of 'minimal' and 'optimal' quality by Gummerus and Paro (2001, 138), this rubric includes five performance levels (i.e. excellent, good, fair, poor and unacceptable) and describes the elements to be assessed in each of them.
Each element to be assessed has been assigned a different percentage: formulation of the source text sense; writing; adequacy of translation mode (mainly relating to the conventions of SDH regarding the elements to be subtitled); and representation of the subtitle on screen.
These separate percentages are used to calculate the final score. Given that our students are in the last year of their degree, more importance is given to SDH-specific methodological, strategic and professional competences, i.e. 'adequacy of translation mode' and 'representation of subtitle on screen'. The sub-component 'formulation of the source text' has been given a lower weighting since the SDH taught in our course is mainly intralingual and there are therefore few problems of transfer. 21

This rubric is still at an early stage of development; it was tested, together with the set of evaluation criteria presented above, in class, and improved during the academic year 2016-2017 at Universitat Jaume I and Universitat de València. Due to space constraints, the description of each performance level is not presented in this article. 22
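Since the actual percentages are not reproduced in this article, the sketch below uses purely illustrative weights, chosen only to reflect the heavier weighting of the SDH-specific components, to show how such a weighted final score would be computed:

```python
# Illustrative weights only -- not the percentages actually used in the rubric,
# beyond the stated principle that SDH-specific components weigh more.
WEIGHTS = {
    "formulation_of_source_text": 0.10,
    "writing": 0.20,
    "adequacy_of_translation_mode": 0.35,
    "representation_on_screen": 0.35,
}

def final_score(sub_scores):
    """Combine per-component marks (0-10) into a weighted final mark."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return round(sum(WEIGHTS[c] * sub_scores[c] for c in WEIGHTS), 2)

marks = {
    "formulation_of_source_text": 9,
    "writing": 7,
    "adequacy_of_translation_mode": 6,
    "representation_on_screen": 8,
}
print(final_score(marks))  # 0.9 + 1.4 + 2.1 + 2.8 = 7.2
```

Each component mark would itself come from the five performance levels of the rubric (excellent to unacceptable) mapped onto the numerical scale.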

A comprehensive assessment proposal for SDH training: using assessment instruments as learning and teaching tools
The pedagogical approach to SDH training presented in this article suggests that both teachers and students actively engage with the two assessment instruments described above. In the survey of teachers, we found that these types of tools are mainly used for summative assessment. However, our experience has shown that it is also possible to use them for formative assessment and that they are useful instruments in the development of competences at different stages of the learning process. Therefore, we believe that they are not only essential tools for the teacher to conduct summative assessment, but are also effective guides for clarifying what is expected of students, helping to provide feedback and rating the students' work. Moreover, we consider that students can also actively engage with them in their own learning process.
Our proposal is part of a competence-based training framework that goes beyond the appraisal of translations and encompasses the evaluation of the learning process.
(5) Using appropriate strategies to solve intralingual and intersemiotic translation problems in different text genres (integration of competences, SC5).
The 'holistic translation assessment method' (Waddington 2000) we propose links assessment tasks to competences and performance levels, and uses different instruments and tasks to assess the learning process and the general and specific competences developed throughout such a process. Following Hurtado Albir's approach (2015), these interrelated graduated tasks prepare students for a complex final task which shows students have acquired the relevant competences and have met the learning objectives.
The tasks described below are not only subtitling tasks, but also tasks related to acquiring knowledge about and preparing students for SDH. The tasks are designed to use evaluation criteria and a rubric not only as assessment, but also as learning tools.

Task 1. Familiarising students with the Spanish standards UNE 153010:2012
Subtitulado para personas sordas y personas con discapacidad auditiva
- Linked to GC1 and GC3; SC3 and SC4
- Formative assessment
- Self-assessment

Students are asked to read the Spanish SDH standards and fill in table 5 (additional rows may be added). To assess this task, students are given an answer key, which is discussed in class.

Task 2. Getting to know the set of evaluation criteria

Students are then presented with a set of evaluation criteria prepared by the teacher, which may be altered in line with some of the students' suggestions. Where a QA instrument from the SDH industry is available, students would benefit from analysing it before being introduced to the criteria to be used in class.

Task 3. Getting to know other standards
- Linked to GC1 and GC4; SC3 and SC4
- Formative assessment
- Teacher- and peer-assessment

As the Spanish UNE standards are rather vague on when and how to describe some elements, such as lyrics, students are asked to read other European standards and, in pairs, answer the following questions:
- Are there any differences in comparison with the Spanish UNE standards?
- Which information is more complete regarding ...?
- Would it be appropriate to elaborate European or even international standards? What advantages or disadvantages can you think of?
- Would you add some information from the international standards to the set of evaluation criteria presented in task 2?
Answers are compared in class. The trainer together with the students would decide whether the set of evaluation criteria should be modified.

Task 4. Partial analysis of professional subtitles
- Linked to GC1, GC2 and GC3; SC1 and SC2
- Formative assessment
- Peer- and self-assessment

Students are asked to analyse how one specific source text element (e.g. background music) has been translated in subtitles shown on TV or distributed on DVD. The material given could be a video clip or some stills. If possible, they may compare more than one set of professional subtitles for the same video clip. After the analysis, students discuss their findings with peers, or they could be given an answer key to self-evaluate the task. Since students analyse only one specific source text element at a time, this task may be repeated during the course to cover other elements of SDH separately (e.g. plot music, sound effects, etc.).

Task 5. Partial subtitling
- Linked to GC2 and GC5; SC1 and SC5
- Formative or summative assessment
- Peer-assessment

Students are asked to subtitle a video clip focusing specifically on one element, for example, music. This video clip would be an original audiovisual production in Spanish or a foreign production dubbed into Spanish.
Once they have completed their own subtitling, students are then asked to evaluate a peer's subtitling in two stages. First, they evaluate the subtitles as a written text and then, they evaluate the subtitles in conjunction with the audiovisual material.
The evaluation criteria already presented in class would be used to assess quality. The subtitler and proof-reader (peer assessor) would then discuss problems and suggest solutions. This task may be used more than once during the course, focusing on a different element each time.

Task 6. Comprehensive subtitling
- Linked to GC4 and GC5; SC1, SC3 and SC5
- Formative or summative assessment
- Self- and teacher-assessment

Students are asked to subtitle a more complex video clip containing all the elements to be included in SDH, to demonstrate whether the necessary translation competences have been developed and the learning objectives achieved. This final task could be used as summative assessment, but it could also be used as a self-evaluation task, as the translation should be accompanied by a translation commentary. This commentary would describe and evaluate the processes followed while subtitling, thus demonstrating the students' awareness of their own translation and learning processes. Before submitting the translation for teacher assessment, students would use the rubric designed by the teacher, which they have already been introduced to, to carry out the self-evaluation.

Task 7. What did I learn?
- Linked to GC4 and SC1
- Formative assessment
- Self-assessment

Students are asked to reflect on their learning process and on the use of rubrics and evaluation criteria to assess SDH quality by answering the questions listed in table 6 (based on Hurtado Albir 2015, 235-236).

Table 6. What did I learn?
(1) What conclusions did you draw on the application of SDH standards in Spain?
(2) What kind of problems did you find when subtitling and using these standards?
(3) What conclusions did you draw on the use of the set of evaluation criteria and the rubric?
(4) What new strategies did you learn to try to solve the problems you encountered?
(5) What aspects did you improve?
(6) What aspects do you still need to improve?
(7) How do you think you can improve them?

Rate from 1 to 10. I am able to:
- Identify elements to subtitle
- Apply Spanish SDH standards
- Identify problems when subtitling
- Solve basic subtitling problems
- Apply appropriate strategies to subtitle
- Assess SDH quality with the set of evaluation criteria and the rubric

Conclusions
This research presents a set of evaluation criteria and a rubric as tools for training in subtitling for the D/deaf and the hard of hearing. Data from the industry and trainers were gathered to examine which assessment processes and instruments are used in the SDH professional sector and university training in Spain.
Our case study reveals that SDH service providers do monitor quality but have not created any specific instruments for this purpose. When undertaking QA, they apply certain criteria and use subtitling programmes to check technical elements automatically. Most SDH trainers in Spanish universities use assessment tools, especially sets of evaluation criteria; however, only some of them include tasks in which students work with these tools as part of formative assessment.
Based on the data gathered, the Spanish SDH standards and a competence-based training approach, this paper proposes two assessment instruments and a number of interrelated and graduated tasks for students to actively engage with them in their own learning process. This assessment methodology is a work in progress. During the academic year 2016/2017, assessment combined the set of evaluation criteria described in section 4.1 with the rubric presented in section 4.2. We believe that this assessment methodology is not only useful for trainers as a guide to assess SDH and to promote self-and peer-assessment, but also for students to assimilate SDH conventions and to serve as a framework to explain their subtitling strategies and decisions.
During the academic year 2017/2018 we will continue validating and improving the assessment tools and tasks (i.e. making changes to the timing, sequencing and required previous knowledge of the tasks). At the end of the course students will also be asked about the usefulness of the tasks and tools and their level of satisfaction.
Results from this research will be shared with SDH service providers and university trainers, and they will be asked about the usefulness and potential application of the tools in their professional settings.
To conclude, this research corroborates the need for more empirical studies aiming to validate similar assessment instruments used in academia and industry by trainers, students, professionals and also, as suggested by CNMC (2016), potential institutions in charge of quality control. In line with Huertas Barros and Vine (2015, 2017, 2018), we believe that academia can help industry to ensure quality and, conversely, industry can help academia to assess more effectively. If the gap is bridged, SDH quality will improve, and this improved quality will in turn benefit the end users, i.e. the D/deaf and the hard of hearing.