Uncovering information literacy’s disciplinary differences through students’ attitudes: An empirical study

This paper applies a self-assessment questionnaire (IL-HUMASS) to a wide sample of university students. The questionnaire proposes a scale of attitudes that aims to measure ‘belief in importance’ and ‘skills self-assessment’ regarding diverse information competences. We use a group of 26 information sub-competences grouped into four categories (searching, evaluation, processing and communication-dissemination). The results show considerable differences in these categories when statistically comparing 17 university degrees belonging to five branches of knowledge. Attitudes vary appreciably between branches, in inverse relation to the interdisciplinary differences we found: the higher the attitudinal scores, the fewer the differences between branches. Improving students’ informational attitudes would therefore help reduce these interdisciplinary differences. The results of this study suggest the feasibility of shared training actions for some information competences in the branches of Sciences, Engineering & Architecture, and Health Sciences. The branches of Arts & Humanities and Social & Legal Sciences show widespread attitudinal differences that advise against such shared training.


Introduction
The University of Granada (Spain) has very limited knowledge of the information competence levels of its students. The situation is similar in most Spanish universities, largely due to the low priority that institutions give to information literacy (IL) training. Nonetheless, this situation is changing, and awareness, both individual and collective, of the relevance of IL is rising.
A first approach to learning about the IL levels of our students should not be a traumatic process in which they suddenly discover and describe their evident lack of information competence. Instead, it is preferable to undertake the process in a way that eases the discovery of the poor levels that our students do, in fact, have. In the case of Spain, we may assume these poor information competence levels are due to the limited training developed in this field. We therefore think that a first approach should be subjective, diagnosing not students' levels of knowledge but their attitudes and self-perceptions regarding IL. This involves the assessment of their self-perceived competences, which could be an interesting starting point for subsequent objective assessments. Learning about students' subjective perception of their knowledge would be useful for the development of training proposals, taking into account the subjective components of learning and the fact that the border between learning and assessment is becoming increasingly blurred.
If we agree that students from different disciplines usually have different informational attitudes, we need an initial diagnostic self-assessment that reports on their attitudes towards information competences and helps us understand what they think and how they assess the relevance of those competences. This would make it possible for academic units and academic heads to undertake training actions aimed at the real improvement of IL levels. The IL-HUMASS (Information Literacy in HUMAnities and Social Sciences) questionnaire used in our research offers a scale of attitudes along two dimensions: 'belief in importance' and 'skills self-assessment'. The 'belief in importance' dimension measures the degree of relevance the student assigns to informational competences in his or her academic training. The 'skills self-assessment' dimension, in turn, provides data on the level of self-esteem that the student has regarding the practice of a given competence.

Aims of the study
IL in higher education institutions has developed from generic approaches towards more specific ones in which context, and mainly the disciplinary context, is becoming increasingly prominent. Discipline-based training, however, has sometimes been carried out without taking into account the diverse student attitudes associated with disciplinary differences. In recent years we have observed that disciplinary needs and specificities are beginning to become an essential part of IL training. In line with this trend, this study has several goals. First, we want to assess the self-perceptions of a representative sample of students from the University of Granada, grouped into five branches of knowledge, regarding the informational competences related to information searching, evaluation, processing and communication.
The second aim is to ascertain whether students' attitudes towards the aforementioned four competence categories differ significantly according to their areas of knowledge. Should differences between areas be proven, we want to know which areas, competence categories and attitudinal dimensions they involve. We think this information could be highly useful for the design of student-centred training programmes applicable across diverse areas. Finally, we attempt to determine whether the interdisciplinary differences in students' attitudes regarding information competences are related to specific criteria.

Literature review
In the field of IL, the literature on self-assessment and, to a lesser extent, on disciplinary differences plays an increasingly important role. Yet only a few studies explicitly deal with the information self-perceptions and self-assessments of students across disciplines. From a closely related viewpoint, we find some interesting studies: on the relation between disciplinary training and the development of competences in using information resources (Nicholas et al., 2010); on disciplinary differences in the relationship between information resource use and theoretical perspectives (Hjørland, 2002); or on how the relevance criteria for information searching and assessment vary by discipline, owing to different ways of thinking (Talja and Maula, 2003). These approaches, however, delve into objective IL parameters. We have not found studies that take an interdisciplinary view of students' information attitudes, which are subjective. In spite of this gap in the existing literature, a review of the studies related to the main concepts of this research is provided below.
IL has been an area of constant research in recent decades (Rader, 2002). Indeed, institutions of higher education are aware of the essential need to produce graduates 'with the knowledge, skills and abilities needed to live and work in the information age' (Oakleaf, 2008: 233). We agree with Freeman and Lynd-Balta (2010: 114) when they state that 'it is imperative that faculty of all disciplines introduce students to effective strategies to filter and analyze information and then provide them with increasingly complex tasks that are discipline-relevant to cultivate critical thinkers and the skill set necessary for lifelong learning'.
But 'information literacy consists of a broader array of competencies than our instructional practices and competency standards would suggest' (Ward, 2006: 396). According to Maybee (2006: 79), 'designing information literacy instruction without incorporating the student perspective leads to an inappropriate pedagogic strategy'. Most of the studies reviewed highlight the relevance of a diagnostic, subjective assessment of information competences (Resnis et al., 2010). In recent years, this method of subjective assessment has grown as an initial and/or complementary tool for objective assessment processes. As Seamans (2002: 112) states, perceptions of 'first-year students are the focus of much library instruction at colleges and universities'. In this sense, we may also mention Green and Macauley (2007: 318), who aimed to investigate students' 'realms of engagement with information'.
A diagnosis of students' perceptions of their own IL and its competences can be achieved through the application of self-assessment tests and their respective 'self-report measures' (Oakleaf and Kaske, 2009; Pinto et al., 2012). A large number of works use self-assessment as a diagnostic method that provides information about students' training perceptions and needs (Colthart et al., 2008; Green and Macauley, 2007; Gross and Latham, 2007; Korobili et al., 2009; Pinto, 2010, 2011). It is sometimes used as the main method (Walsh, 2009).
In the context of this research, Pinto (2012) has carried out a self-assessment of Spanish history students. The ACEJMC (Accrediting Council on Education in Journalism and Mass Communications) survey measures the perceptions of journalism students (Singh, 2005). IL has been assessed within Biology studies by a number of tools at Macquarie University (Vickery and Cooper, 2002). In this context of self-assessment, 'health literacy' stands out (Elder et al., 2012), where there is a preference for analysing the 'attitudes of students in the healthcare professions towards computers and e-learning' (Wilkinson et al., 2009: 755), and particularly the tendency towards 'fair access to informatics and technology-rich clinical settings' (Fetter, 2009: 86). Fetter uses the TIGER (Technology Informatics Guiding Educational Reform) tool to self-assess a set of information technology competences among nursing students. Other studies focus on 'the effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice' (Colthart et al., 2008: 124). CAUL ISS (Council of Australian University Librarians Information Skills Surveys) is a standardized, 20-item self-report inventory of information literacy skills in higher education, which was applied to medical students (Clark and Catts, 2007). Likewise, the Research Readiness Self-Assessment (RRSA) was used to measure the health information competencies of university students (Ivanitskaya et al., 2006).
Above all, 'knowledge about the internal or subjective side of these students' information literacy is scarce. This personal facet includes their academic behavior, feelings and attitudes' (Pinto 2011: 145). As Scales and Lindsay (2005: 519) put forward, 'attitudes toward information literacy are complex and varied but are measurable and could perhaps be used to further the development of information literacy pedagogy'.
Among these attitudes, there are two we are particularly interested in: 'belief in importance' and 'skills self-assessment'. Some experts, such as Weiler (2004), relate 'belief in importance' to 'critical thinking' in the context of the 'learning theory'. On the other hand, 'skills self-assessment' is closely related to 'self-efficacy', a concept suggested by Bandura (1986: 123) in the sense of 'self-percepts of efficacy', as 'people's judgments of their capabilities additionally influence their thought patterns and emotional reactions during anticipatory and actual transactions with the environment'. Self-efficacy affects choice of tasks, effort, persistence, and achievement (Usluel, 2007).
Nevertheless, self-assessment initiatives do not usually occur in isolation, and there are many instances in which self-assessment is combined and/or compared with an objective assessment (Bandyopadhyay, 2012; Patterson, 2009). It is a matter of knowing 'how students' self-assessment of their ability compares to their actual skill as demonstrated through testing' (Gross and Latham, 2012: 576).
This combination of objective and subjective tools provides a look at the 'association between scores on an IL skills test and students' estimates of their IL skills' (Gross and Latham, 2012: 578). Alternatively, self-assessment methods have also been applied to psychology studies to diagnose information competences, along with expert assessment, as Thaxton's (2002) work shows. There are also mixed methods that triangulate data gathered from in-class task assignments, with questions on students' process of solving information-related problems, and from semi-structured interviews with students (Julien and Barker, 2009). A work on multiple information-related student perceptions, gathered through questionnaires, tests, focus groups and tasks, was recently published by McKinney et al. (2011).
Other interesting avenues for assessing information competences from perspectives which are more closely addressed to particular tasks are also becoming available, such as authentic evaluation (Brown and Kingsley-Wilson, 2010;Diller and Phelps, 2008) or the use of portfolios (Fourie and van Niekerk, 2001).
These recent trends towards mixed (subjective and objective) methods, such as authentic evaluation (including the use of rubrics and/or portfolios), are related to phenomenography, a research school that 'provides researchers with a means of constructing rich, multifaceted representations of the variation regarding phenomena' (Boon et al., 2007: 210). In this context, 'a phenomenographic conceptual framework investigates learning from the perspective of the learner, with the aim of reflecting on the features that this approach shares with information literacy education' (Andretta, 2007: 152). This methodology is contributing to the improvement of academics' conceptions of, and pedagogy for, IL.
The fact is that 'information literacy as a discrete phenomenon is still perceived as being a relative newcomer to many disciplines' (Boon et al., 2007: 224). However, 'despite the increasing emphasis on collaboration between the library and the discipline-based faculty in teaching IL, the skills emphasized in the IL literature are, in fact, generic' (Grafstein, 2002: 198). Indeed, 'an assessment to determine the IL skills level of a specific student body is crucial to developing a comprehensive approach to IL instruction' (Anderson and May, 2010: 499). The discipline involved is essential in IL literature from the phenomenographic viewpoint: 'the concept of IL is one that contextualizes it within the structures and modes of thought of particular disciplines' (Grafstein, 2002: 202).

Methodology
We have used a quantitative methodology, based on the administration of the IL-HUMASS online questionnaire and on descriptive and inferential statistical processing of the data.

Data collection
The questionnaire. The questionnaire design is based on a wide corpus of IL literature, covering rules of a general nature (ALA/ACRL, 2000; Bloom et al., 1956; Bruce, 1997; Corrall, 2007; SCONUL, 1999; Webber and Johnson, 2006) as well as specific aspects of empirical user-centred research (Limberg, 2005; Maybee, 2006; Tuominen et al., 2005). Nonetheless, the initial design of the questionnaire was geared to its priority use in Spanish and Portuguese universities (Pinto, 2010). For this reason, we have not taken into account all the dimensions of the several questionnaires designed in other settings.
The goal of IL-HUMASS is to provide a self-assessment of information competences in the context of higher education, gathering students' opinions in order to learn which competences are useful in the teaching-learning process, with the aim of including in the curricula appropriate contents that may contribute to strategic, competence-based training. The questionnaire collects data, through 26 questions, on four interrelated competence categories: searching, evaluation, processing and communication-dissemination of information (Pinto, 2011). Each question is answered along three dimensions: belief in importance, skills self-assessment (both on a nine-point Likert scale) and preferred habit of learning (Pinto, 2012).
The underlying sub-competences, grouped by category, are the following. Searching: 1. use printed sources of information; 2. enter and use automated catalogues; 3. consult and use electronic sources of primary information; 4. use electronic sources of secondary information; 5. know the terminology of your subject; 6. search and retrieve Internet information; 7. use informal electronic sources of information; 8. know information search strategies. Evaluation: 9. assess the quality of information resources; 10. recognize the author's ideas within the text; 11. know the typology of scientific information sources; 12. determine whether an information resource is up to date; 13. know the most relevant authors and institutions within your subject area. The categories of processing and communication-dissemination comprise the remaining sub-competences (14-26).

Reliability and validity. The basic properties of any measurement tool are reliability and validity. Reliability, or consistency, measures the extent to which an instrument produces the same results on repeated attempts. One of the most widely used measures of internal consistency is Cronbach's alpha coefficient (Cronbach, 1951). If the individual items are highly correlated with each other, one can be highly confident in the reliability of the entire scale. In this case the weighted average of the correlations between the items (Cronbach's alpha coefficient) is 0.948, and the questionnaire is therefore reliable. That is, 'answers in the survey are most likely to differ because respondents have real differences of opinions, not because the survey is confusing or has multiple potential interpretations' (Garde et al., 2005: 12).
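As an illustration of this reliability check, Cronbach's alpha can be computed directly from an item-response matrix. The sketch below uses small, invented nine-point Likert responses (the actual IL-HUMASS responses are not reproduced here), so it shows only the formula, not the study's computation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (26 for IL-HUMASS)
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented nine-point Likert responses: 5 students x 4 items
scores = np.array([
    [7, 8, 7, 6],
    [5, 5, 6, 5],
    [9, 8, 9, 8],
    [4, 5, 4, 5],
    [6, 7, 6, 7],
])
print(round(cronbach_alpha(scores), 3))  # → 0.955
```

By the usual rule of thumb, alpha values above 0.9 indicate excellent internal consistency, consistent with the 0.948 reported for the questionnaire.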
A tool, in turn, is valid if it can be confirmed that it measures what it purports to measure. The IL-HUMASS questionnaire fulfills the two validity criteria generally considered most important, both of which were tested (Pinto, 2010): content validity (does each question test the property that the designers intended?) and construct validity (does the whole test measure the 'idea', i.e. information literacy, that it was intended to measure?).
The sample. The overall universe of students at the University of Granada, in the 17 degrees selected for this study, amounts to 20,582. The sampling process was probabilistic and stratified by degree. A sampling error of ±5% and a confidence level of 95% were set. The sample size (simple random sampling) was estimated using the statistical program Stats 2.0, which gave a preliminary result of n=1036 sample units. As a preventive measure, we increased our sample to 1530 completed questionnaires. Of these, 110 were discarded as incomplete, leaving a sample of n=1420. The survey was distributed among students from January to May 2010 in computer laboratories, ensuring that they were representative of the five branches of knowledge: Arts and Humanities (424, out of 3998 students), including degrees in English Studies, Spanish Studies, History, and Translation and Interpreting; Social Sciences and Law (537, out of 6028 students), including degrees in Information Studies, Law, Education and Psychology; Sciences (108, out of 2632 students), including degrees in Biology, Environmental Sciences and Mathematics; Health Sciences (109, out of 2126 students), including degrees in Medicine and Dentistry; Architecture and Engineering (242, out of 5798 students), including degrees in Technical Architecture, Civil Engineering, Computer Engineering and Chemical Engineering. In the overall sample of 1420 students, 914 were women and 506 were men. Data gathering covered all academic years, with more emphasis on the first, third and fifth years (the curricula of the degrees analysed in this research last five academic years).
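For reference, the standard finite-population formula for estimating a proportion can be sketched as follows. The stratified allocation actually computed by Stats 2.0 across the 17 degrees is not detailed in the text, so this single-population calculation is illustrative only and is not expected to reproduce the reported n=1036.

```python
import math

def sample_size(N: int, e: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Finite-population sample size for estimating a proportion.

    N: population size; e: margin of error (0.05 = ±5%);
    z: z-score for the confidence level (1.96 for 95%);
    p: expected proportion (0.5 is the most conservative choice).
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)      # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite-population correction

print(sample_size(20_582))  # single-population calculation for the whole universe
```

Using p = 0.5 maximizes p(1 - p) and therefore the required sample, which is the usual conservative default when the true proportion is unknown.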

Data analysis
With the selected data, diverse statistical analyses were carried out, both descriptive and analytical, using the SPSS 20.0 program.

Criteria of analysis.
The IL-HUMASS survey aims to analyse the subjective data provided by the student for each of its 26 variables, or information competences, from a triple perspective directly linked to the three established dimensions: (1) the student's belief in the importance of the informational competence; (2) skills self-assessment in the exercise of that competence; and (3) preferred habit of learning. For the purposes of this paper, we have not taken this third dimension into account, because its qualitative nature demands another kind of analysis, which we will develop in future studies.
The 26 questionnaire variables are grouped into four categories: searching, evaluation, processing and communication-dissemination of information. The analysis was carried out with these categories in all the discipline areas of our sample. However, the sample does not meet the conditions of statistical normality and homogeneity required for a parametric analysis (one-way ANOVA and post-hoc tests) that would let us compare the behaviour of the four competence categories across the five branches of knowledge. We have therefore turned to non-parametric procedures: the Kruskal-Wallis test, which is the non-parametric equivalent of the one-way ANOVA and an extension of the Mann-Whitney U test that allows the comparison of more than two independent groups (Lund and Lund, 2013).
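As a sketch of this procedure, a Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney U tests can be run with SciPy. The branch names and score distributions below are invented for illustration; they are not the study's data.

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented per-branch score samples for one competence category
# (nine-point Likert-style values; NOT the study's data)
branches = {
    "Arts & Humanities": rng.normal(7.0, 1.2, 80),
    "Social & Legal":    rng.normal(6.9, 1.2, 90),
    "Sciences":          rng.normal(6.2, 1.5, 60),
}

# Omnibus test: do the branches differ at all?
h, p = stats.kruskal(*branches.values())
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

# Pairwise Mann-Whitney U tests to locate where the differences lie
for a, b in combinations(branches, 2):
    u, p_pair = stats.mannwhitneyu(branches[a], branches[b],
                                   alternative="two-sided")
    print(f"{a} vs {b}: U={u:.0f}, p={p_pair:.4f}")
```

In practice, p-values from many pairwise tests would normally be adjusted for multiple comparisons (e.g. Bonferroni); the paper does not state whether such a correction was applied.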

Findings and discussion
The average values on the scale of attitudes are, for all variables, clearly higher in the dimension 'belief in importance'. The overall average score on this attitudinal scale (7.15) stands out over the overall average score for the dimension 'skills self-assessment' (5.88). This significant difference between the two scores suggests that improvement initiatives are feasible, because students clearly consider information competences to be relevant. We also observe, in both dimensions, high scores in the categories of searching and evaluation, in comparison with the categories of processing and communication (Figures 1 and 2). The surveyed students regard searching and evaluation in a similar way, with close scores and, from a positive perspective, scores above the overall average. A similar pattern is repeated for processing and communication-dissemination of information, but in this case from a less positive perspective, with scores below the overall average.
We have not found diagnostic interdisciplinary IL studies with variables comparable to those of the present paper: studies such as Head (2008) or Head and Eisenberg (2010) focused on the procedural aspects of information-seeking strategies and on the research process and difficulties of college students, or, like Arum et al. (2012), on analysing limited academic engagement and subsequent learning outcomes. Others, such as Comas et al. (2011), deal with random samples of university students regarding information seeking for academic purposes.
In our study, if we compare the average values diagnosed in the different branches of knowledge, we observe slightly higher values in the four competence categories, and a higher degree of concentration of all of them, in the branches of Arts & Humanities and Social & Legal Sciences. In the branches of Sciences and Engineering & Architecture the values are slightly lower and more dispersed. Nonetheless, the highest dispersion of average values is seen in the branch of Health Sciences, with the lowest results in the competences of processing and communication-dissemination (Figures 1 and 2). At this point it seems relevant to point out that, of the 17 degrees analysed, only two (Translation and Interpreting, and Information Studies) include a subject on information competences in the curriculum. There is also some coverage of clinical documentation in the Health Sciences degrees, taught from within that same area.
We have also used the non-parametric Kruskal-Wallis and Mann-Whitney tests, clustering the results of the analysis into four groups corresponding to the four competence categories of the questionnaire. For each of them, all variables have been taken into account, and the results have been matched across all possible pairs of branches of knowledge, in order to check for statistically significant differences. This allowed us to put forward some groupings of areas for the future design and development of training programmes in information competences.
A first global approach confirms that there are statistically significant differences between the attitudes in the five branches of knowledge, regarding the four competence categories and the two dimensions of the scale (Table 1). From this overall perspective, which takes the five branches of knowledge as the grouping variable, we see that only the category of searching, in the dimension 'belief in importance', shows no significant differences between branches. This could be considered positive, because it reveals that, in spite of the infoxication (information overload) that surrounds students in the digital age, their perception of the importance of the information searching competence is clear.
However, this first overall result does not show where the attitudinal differences between areas lie. For that, we applied a non-parametric analysis of independent samples (Mann-Whitney test), matching one by one all the branches of knowledge on the eight variables of the searching category, in both attitudinal dimensions (Table 2). The bold-type scores indicate a significant difference in the variable between the two branches being compared, that is, the paired areas.
It is seen that there are no statistically significant differences in any of the variables, in both attitudinal dimensions, when matching the perceptions of the students from the area of Sciences with those from the areas of Health Sciences and Engineering & Architecture. Likewise, we observe that there are four competences that do not show differences between the different areas. These variables refer to the 'belief in importance' regarding entering and using automated catalogues (i2), and also consulting and using electronic sources of primary information (i3) and secondary information (i4), and the 'skills self-assessment' in the use of informal electronic sources of information (n7).
The attitudes of 'belief in importance' of knowing information search strategies (i8), and 'skills self-assessment' on entering and using automated catalogues (n2), consulting and using electronic sources of primary information (n3), and using electronic sources of secondary information (n4) hardly show significant differences. Regarding this competence category, the greatest statistical differences are found between the areas of Social & Legal Sciences, and Engineering & Architecture, because the number of competences which show differences is higher.
We have also applied the same test to the variables of the category information processing (Table 4). It turns out that significant differences increase in comparison with the searching and evaluation categories. However, the areas of Sciences and Engineering & Architecture show no differences in any of the competences of this category. By contrast, the differences involve all the variables when matching the areas of Arts & Humanities and Health Sciences.
The category communication-dissemination shows results similar to those of the category of processing: the areas of Sciences and Engineering & Architecture hardly show significant differences in any variable, except for the 'belief in importance' of writing a document (i-22) (Table 5).
All in all, we observe that the significant differences between branches of knowledge regarding information competences vary depending on the category. Searching and evaluation show a similar number of statistically significant differences, and the same is true of processing and communication-dissemination, where the numbers of variables with statistically significant differences between areas are almost identical. The increase in the percentage of statistically significant differences for processing and communication-dissemination coincides with the decrease in attitudinal scores in these categories. We can thus speak of an inverse relationship between the average self-assessment values for competencies (Figures 1, 2) and the number of significant differences between paired branches (Tables 2, 3, 4, 5), depending on the pair of information categories. The highest average self-assessment values are observed for the pair searching and evaluation (Figures 1, 2), and it is precisely these two categories that show the lowest number of significant differences between paired branches (Tables 2, 3). Conversely, the smallest average values are seen for the pair processing and communication-dissemination (Figures 1, 2), and these are precisely the categories that show the highest number of significant differences between paired branches (Tables 4, 5).

Conclusions and recommendations
The average values assigned by the surveyed students to attitudes regarding information competences in the dimension 'belief in importance' are outstanding in all areas, which suggests a high degree of student awareness of the relevance of information competencies as a key element in the learning process. These values are considerably higher than those for the dimension 'skills self-assessment', which confirms that the self-esteem of those same students regarding the knowledge and practice of these competences does not match the relevance they assign to them, reflecting a sound self-critical exercise. In this attitudinal context, there is a clear and significant difference between the scores for the two dimensions. Ideally, that difference would be minimal. We therefore consider the priority to be improving students' informational self-esteem through the development of adequate training actions. The attitude 'belief in importance' can be strengthened through promotional training actions that alert students to its need and relevance. The attitude 'skills self-assessment' can be improved through training actions that stress the acquisition of both knowledge and know-how. Such actions, according to the results of this study, should take into account the characteristics of the different branches of knowledge and the nature of the information categories.
With regard to attitudes towards the information literacy categories, the average values expressed by the students show two levels: an outstanding attitude regarding searching and evaluation, and a merely passable attitude in relation to processing and communication-dissemination. Taking this into account, we consider that any educational proposal focused on searching and evaluation could be approached at an advanced level, taking advantage of the students' supportive attitude. Processing and communication-dissemination of information, on the other hand, should preferably be tackled at a basic training level, to try to improve the students' motivation (Table 6).
To develop training proposals that take the students' viewpoint into account, we have combined the average values of competencies in both dimensions, 'belief in importance' and 'skills self-assessment' (Figures 1, 2), with the significant differences between paired areas (Tables 2, 3, 4, 5). We relate the dimension 'belief in importance' to theoretical formative action (know-what, awareness), and the dimension 'skills self-assessment' to practical formative action (know-how, training). In doing so, and combining the average competency values for these two dimensions with the significant differences between paired branches, we have been able to distinguish three sub-groups of knowledge branches. In the first, with slightly higher values and a higher degree of data concentration, we locate the students from Arts & Humanities and Social & Legal Sciences. This first sub-group shows the most statistically significant differences, not only when comparing these two areas with each other but also when comparing each of them with the rest (Tables 2, 3, 4, 5). Given these differences, the formative actions should be specific and independent for each of these two areas (Arts & Humanities and Social & Legal Sciences). Moreover, as both areas show the highest average self-assessment values for competencies, this specific action could be of an advanced level, considering the students' high scores.
The second sub-group, with intermediate competence values, comprises the students from Sciences and Engineering & Architecture. The students from the branch of Health Sciences, with lower scores and greater data dispersion between categories, stand apart from the rest. Overall, the branches of Sciences, Health Sciences and Engineering & Architecture show lower average scores, and it is feasible for these branches to share formative actions. These educational initiatives could be of an advanced or a basic level, depending on the information category. As stated above, we distinguish between theoretical formative actions, related to 'awareness' or students' declarative knowledge, and procedural actions, related to 'training' or practical knowledge (see Table 6). For the category of searching, advanced-level formative actions addressing the students' theoretical or declarative knowledge, that is, the 'awareness' related to 'belief in importance', could be shared by Sciences, Health Sciences and Engineering & Architecture. Also for searching, Sciences and Health Sciences could share formative actions at a procedural or 'training' level, related to the students' 'skills self-assessment'.
On the other hand, the branches of Sciences and Engineering & Architecture could share basic-level formative actions, covering both awareness and training, for the categories of processing and communication-dissemination of information (Table 6).
The branch of Health Sciences is the one most in need of improvement in students' attitudes towards IL in general, and above all in relation to processing and communication-dissemination of information (Figures 1 and 2). Any training initiative on these two categories would be advisable for this branch.
In any case, this study confirms that the IL paradigm is discipline-dependent and should be made distinctive and specific for almost every discipline. Students' IL attitudes differ substantially between disciplines; therefore, discipline-specific IL training actions should be fostered. This would probably help reduce the attitudinal differences between students across academic disciplines.
All in all, it would be desirable to be able to provide students with sound IL training, and foster a perception similar to the one diagnosed by Scales and Blakesley (2005: 521): 'A majority of the students tied information literacy to human development or to the desire to learn or understand something, either for the sake of learning or to fulfill a need. A minority saw it as related only to libraries and specific class projects.'
IL is embedded in the activities of particular groups and communities of practice; that is, information competences evolve in disciplinary contexts, and they are practised by communities according to their dynamic needs. Certainly, it is also a fruitful area for transdisciplinary actions (Shenton and Hay-Gibson, 2011). It is indeed an area of training that needs to be strengthened in all disciplines, deeply and specifically. A diagnosis of students' attitudes towards information competences may be a sound starting point to take action.