LEVERAGING MECHANICAL 3D CAD SYSTEMS THROUGH IMPROVED MODEL QUALITY BASED ON BEST PRACTICES AND RUBRICS

CAD data quality may be improved through quality-oriented CAD instructional strategies, but current mechanical CAD practice fails to enforce quality. The concept of CAD model quality is reviewed to discover how different dimensions of quality can be conveyed through rubrics. The paper describes lessons learned from pilot studies designed to gain knowledge on introducing best practices throughout the training period of novice product designers. Best practices are conveyed by rubrics, leading to increased quality in CAD models. Rubrics must adapt to the specific concept practiced at each stage of the training period. In this way, they become helpful in disclosing which currently available quality-enforcing strategies and tools can be comprehended early and how they are best introduced. The lessons learned from the pilot studies are proposed for reformulation as strong hypotheses, to be validated or rejected in the future through experiments with suitable statistical power.


INTRODUCTION
Traditional computer-aided design education in mechanical engineering (MCAD) still remains a major challenge today. Current educational practice does not provide sufficient strategic knowledge and understanding to enable students to fully use CAD systems as knowledge-intensive design and communication tools to properly develop and convey design intent [1]. CAD data quality [2][3][4] may be improved through different strategies, such as requiring CAD users to adhere to best-practices documents (modeling guidelines) or applying software tools to test (and sometimes automatically repair) CAD models. While both strategies are well recognized, they are difficult to implement for several reasons: best-practices documents are usually developed in-house by major Original Equipment Manufacturers (OEMs) and are not made publicly available, while built-in quality tools in CAD systems and external product data quality (PDQ) checkers are oriented to verifying mathematical and topological quality criteria, leaving organizational and methodological aspects largely uncovered. Beyond these limitations, the main drawback is that both strategies are introduced late in the training period of fresh engineers.
In our vision, early introduction of best practices is feasible and beneficial in order to increase the average quality of CAD models produced by fresh engineers while making them more sensitive to CAD quality culture. This approach does not preclude more sophisticated CAD quality enforcing strategies.
Rubrics have proved valid for disclosing some quality criteria that can be efficiently transmitted in a basic mechanical CAD course for freshmen [5]. In this paper, we conclude that rubrics are inadequate if they are generic and used in their standard way. Instead, rubrics must adapt to the specific concept practiced at each stage of the training period.
The paper begins by reviewing the concept of CAD model quality to find a) how the dimensions of quality models can be detailed in suitable assertions, and b) how good practices may be hierarchically organized around main dimensions. This hierarchical organization must allow incremental evaluation (with varying levels of detail), to prevent quality from being evaluated only at the end. The paper also summarizes lessons learned from some pilot studies. Their conclusions are proposed as strong hypotheses that should be validated or rejected by way of suitable future experiments.
We foresee three main reasons why high-level quality models are difficult to produce. First, careless modeling strategies result in errors that are not always detected by the CAD application. Second, even best practices often pursue only the modest goal of pure geometric quality; high-level best practices, such as those aimed at emphasizing design intent, are rarely reinforced at every stage or transmitted from stage to stage. Finally, the topic of quality is frequently not covered at all in CAD training.

Modeling Strategies
CAD tools are supposed to prevent users from modeling non-valid geometry. If the input is geometrically incorrect, the application automatically detects the geometrical incompatibility and warns the user. However, the CAD application does not always detect geometry mistakes. In addition, some CAD applications include inspection tools that may help, but they are not automatically triggered. So, a careless user may deliver erroneous models without noticing potentially dangerous defects.
High-level design information is oftentimes absent in communication from one stage to the next. For instance, what most CAD applications define as features should, at best, be termed form features. We here distinguish three types of features: a) design features (configurations which, in the designer's mind, have a particular purpose related to the function of the object [8]), b) manufacturing features (shapes produced by a specific manufacturing process [13]), and c) form features (geometric descriptions of shape with no implied relation to function or manufacturing method [14] and [15]). Vandenbrande and Requicha [16] note that manufacturing technologies may change several times during the object's lifetime, while the object retains the same function throughout. On this basis, objects should be designed in terms of their function, not in terms of manufacturing technology. We can extend this argument to assume that perhaps form features are at times the only option to obtain 3D models, but they hardly convey design intent. So, they should be replaced by design features whenever possible, since the latter convey the highest semantic information. Certainly, some manufacturing features are built-in and available in the menus of many CAD applications (fillets, drilled holes, etc.). But pure design features are nearly absent from current CAD applications. Ribs are a remarkable exception: a typical design feature commonly available in commercial CAD applications. Hence, transmitting design intent is currently limited mainly to a) very low-level aspects that can be conveyed through simple form features, and b) the use of manufacturing features to indirectly convey design intent.

Best Practices
As said above, requiring designers to adhere to style guides (modeling guidelines) promotes good practices (at least for a particular sector or enterprise). However, those style guides are not always openly available, and they are introduced only after the initial training period of fresh engineers.
CAD practice should not include CAD quality as a complementary goal to be addressed only after basic skills have been obtained. Instead, CAD quality should be the guideline of the training process from the very beginning. It is true that many good practices aimed at increasing the quality of CAD models have been used to promote new teaching strategies (e.g. [18]). But, to the best of our knowledge, they have not yet been combined in a comprehensive approach that involves both students and teachers.
In this paper, we argue that this approach can be accomplished by introducing suitable rubrics that highlight and institute good practices, which prevent designers from neglecting these good practices or just following them sporadically.

Teaching Quality in CAD Models
A simple review of the most relevant and up-to-date textbooks illustrates that good modeling practices are suitably explained and emphasized. But it is also evident that many textbooks and tutorials only seem to sporadically remind the user about the need to acquire good practices. Recommendations aimed at promoting good practices are interspersed intermittently instead of being the backbone of the documents.
For instance, many handbooks and tutorials explain clearly that rounds and fillets should not be included at the beginning of model creation, but only added at the end. It is argued that this procedure is good practice because: a) geometrical engines are quite efficient in dealing with rounds and fillets separately, b) simplified models are easy to obtain by temporarily suppressing those separate operations, and c) it reduces the workload, since rounds and fillets overload not only the computer but also the user (because profiles that become more complex are more prone to error).
Paradoxically, most handbooks and tutorials include examples, aimed at illustrating the modeling process, which do not follow this recommendation. Sometimes the rounds and fillets are directly embedded in the profiles without explicitly explaining the purpose (perhaps in a naive attempt to simplify the explanation of the modeling process). At times, the explanation given is that this method is used by expert users to shorten the modeling time.
The implicit wrong message is: a) that the recommendations aimed at promoting good practices may be safely disregarded, and b) that expert users are more concerned with reducing modeling time than with producing quality models.
Besides, CAD teaching rarely introduces explicit tasks aimed at checking quality. Conveying the importance of using simple testing tools already available in current CAD applications is not common in CAD textbooks and courses.

RUBRICS FOR CAD QUALITY
The scoring rubric is a double-entry evaluation matrix, where every row evaluates one aspect and every column contains a quantifier. Scoring rubrics were created because they ease the rating of complex, imprecise, or subjective tasks by producing consistent criteria. But what is important here is that, if the rubric is made public, it communicates expectations of quality around a task [19]. Hence, developmental rubrics are assessment tools aimed at supporting student self-reflection and self-assessment, as well as communication between the assessor and the assessed.
Rubrics may be designed to explicitly enforce quality of engineering documents produced with 3D CAD applications.
The six dimensions proposed in [5] are suitable to measure the achievement of students in a CAD course. They are displayed in Table 1, where "project documentation" stands for "model", "assembly", "drawing", or a combination, depending on the particular output of the task under evaluation.
In order to measure the degree of accomplishment of the six dimensions, every assertion must be marked following the simple five-point Likert item shown in Table 1: Strongly disagree, Disagree, Neither agree nor disagree, Agree, Strongly agree.
In an inclusive evaluation, suitable for measuring progress in the learning process, every dimension is measured separately, and the final assessment is the sum of the item scores (a Likert scale).
By contrast, in an exclusive evaluation, suitable for measuring the acquisition of final competencies, failure to pass a single dimension must be interpreted as a sign that deficiencies persist that must be corrected before continuing. A dimension is failed when its item is evaluated as "Strongly disagree" or "Disagree". This "stage-gate" assessment makes sense because quality in CAD modeling is an incremental process, in which the dimensions are sequentially dependent on each other. Hence, failing one of them means that progress in the subsequent task makes no sense (e.g. a concise but inconsistent model is useless). As a consequence, in the exclusive evaluation, questions must be answered sequentially, and failure to pass one dimension must be understood as failing the whole evaluation.
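The two evaluation modes can be sketched as follows (a minimal illustration; the function names and the 0-4 numeric coding of the Likert answers are our own assumptions for the sketch, not part of the rubric itself):

```python
# An assumed 0-4 coding of the five-point Likert item.
LIKERT = {"Strongly disagree": 0, "Disagree": 1,
          "Neither agree nor disagree": 2, "Agree": 3, "Strongly agree": 4}

def inclusive_score(answers):
    """Inclusive evaluation: every dimension is scored separately and
    the final assessment is the sum of the item scores (a Likert scale)."""
    return sum(LIKERT[a] for a in answers)

def exclusive_score(answers):
    """Exclusive ('stage-gate') evaluation: dimensions are checked in
    sequence; 'Strongly disagree' or 'Disagree' on any dimension fails
    the whole evaluation, since later dimensions depend on earlier ones."""
    for a in answers:
        if LIKERT[a] <= 1:   # this gate is failed
            return None      # the whole evaluation fails
    return sum(LIKERT[a] for a in answers)
```

Note that the sequential check makes the exclusive mode strictly harsher: any answer below "Neither agree nor disagree" invalidates the remaining dimensions.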
The generic rubric depicted in Table 1 is useful to understand how rubrics may assist in introducing quality early in the CAD teaching curriculum. But one assertion per dimension is not enough. A more detailed rubric is required for the students to understand the meaning of the main dimensions. Furthermore, as suggested in [5], teaching to model does not necessarily include teaching to evaluate the model. Hence, the rubric must be suitably developed, including a set of detailed assertions for every dimension. Some ideas on how to develop the rubrics are suggested in [5].
Consequently, we have developed a set of customized rubrics and have conducted a set of experiments to gain experience on whether they are useful to convey CAD quality.

VALIDATION
The CAD rubrics have been tested through three pilot experiments. The population was a group of students of a third-year course of a degree in Industrial Technologies. The subject was Engineering Graphics, aimed at learning to use 3D CAD applications to produce detailed designs of industrial products. Students had previously taken a subject named Graphic Expression, where they learned the fundamentals of descriptive geometry and the standard representation of engineering drawings. They also learned to use those fundamentals to produce both hand-drawn sketches and 2D CAD engineering drawings. The CAD package used in the experiment was SolidWorks®. We assume that this particular software does not greatly influence the results, since it has been reported that, with minimal instruction, users can transfer their established high-level modeling strategies between packages [20]. However, further experiments are required to confirm this hypothesis.
The sample size was 33 students in all three experiments, but some subjects returned invalid rubrics: we received 33 valid rubrics in the first experiment, 27 in the second, and 28 in the third.
To validate the rubrics, we used a double-blind process: at the end of each exam, students submitted a rubric self-evaluating their performance, and the teacher evaluated the work separately. Then, the differences between the self-evaluation and the teacher evaluation were calculated for every subject and assertion. Finally, the average differences for each subject and each assertion were also calculated.
Every exam was evaluated by only one teacher. This method was chosen for brevity, as we simply intended to acquire preliminary conclusions and reformulate them as strong hypotheses that should be validated or rejected in the future, by way of experiments with suitable statistical power. So, we neither used a control group nor formally checked whether evaluations by different teachers would coincide.

Experiment 1
For this experiment, the first task was modeling the part depicted in Figure 1 [21]. As it had been previously detected that students fail to understand how to evaluate assertion 3.1 (i.e. whether the model may be locally modified without causing undesired behavior), they were explicitly asked to accomplish a second task consisting of the following modifications: the length of the vertical pipe should be increased from 36 to 50 mm; the diameter of the hole of the horizontal pipe should be reduced from 15 to 10 mm; the length of the central body should be decreased from 110 to 80 mm and its width increased from 47 to 60 mm; finally, the radius of the brackets should be increased from 5 to 10 mm.

To measure the differences between self-evaluation and teacher evaluation, answers were quantified in the range [0, 1], where 0 is "Strongly disagree" and each subsequent step in the Likert scale adds 0.25. Then, the differences (self-evaluation minus teacher evaluation) were tabulated (Table 2). The table visually highlights discrepancy: a white cell means no discrepancy, while a black cell means maximum discrepancy; increasingly large partial discrepancies are highlighted with increasingly darker gray backgrounds. Two of the rightmost columns show mean differences in two ways: one shows the absolute differences, which are greater than one step in the Likert scale for 19 of the 33 subjects. The other shows whether the students tend to be more optimistic (high positive value) or more pessimistic (low negative value) than the teacher. We note that subjects 4, 5, 6, 14, 21, 26, 31 and 32 were systematically more optimistic than the teacher, while only subject 27 was systematically more pessimistic. The last column shows how the teacher marked the subjects. All dimensions were equally weighted. Marks are in the range [0, 10], where a mark of 5 or higher means passing the exam.
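The quantification and discrepancy computation can be sketched as follows (an illustrative fragment; the function names are hypothetical, but the [0, 1] coding with 0.25 steps and the signed and absolute means follow the procedure described above):

```python
STEP = 0.25  # one step of the five-point Likert item

def quantify(answer):
    """Map a Likert answer to [0, 1]: 0 is 'Strongly disagree' and each
    subsequent step in the scale adds 0.25 ('Strongly agree' = 1.0)."""
    scale = ["Strongly disagree", "Disagree",
             "Neither agree nor disagree", "Agree", "Strongly agree"]
    return scale.index(answer) * STEP

def discrepancies(self_eval, teacher_eval):
    """Per-assertion differences (self-evaluation minus teacher evaluation).
    The signed mean shows optimism (+) or pessimism (-); the mean of the
    absolute values shows overall disagreement regardless of direction."""
    diffs = [quantify(s) - quantify(t) for s, t in zip(self_eval, teacher_eval)]
    signed_mean = sum(diffs) / len(diffs)
    abs_mean = sum(abs(d) for d in diffs) / len(diffs)
    return signed_mean, abs_mean
```

A subject whose signed mean is high and positive is systematically more optimistic than the teacher; a subject with a large absolute mean but a near-zero signed mean simply disagrees in both directions.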
The only clearly perceptible correlation is that students who fail the exam also seem to fail at self-evaluation.
The last rows show the average deviation between students and teacher for each assertion. Figures are in red for the highest differences between students and teacher. The difference is close to two steps only for assertion 6.5, but it is greater than one step for up to 7 assertions (2.2, 3.1, 3.2, 4.1, 5.1, 6.1 and 6.4).
Some tentative conclusions derived from Table 2:
1. The absence of a reward does not seem to affect the involvement of students. Apart from a few subjects (8, 29), the differences between the self-evaluation and the teacher evaluation are only slightly higher than one step in the Likert scale (0.28 on average).
2. Introducing the rubric earlier than in [5] seems to have had a small beneficial effect, as the correlation between self-evaluation and external evaluation is slightly better.
3. Explicitly asking the students to modify the part seems to have had no effect in helping them understand assertion 3.1, which is still poorly understood. It must also be noted that some students had not previously been exposed to this kind of editing task.
4. Complex concepts, like those in 5.1 (design intent) or 6.5 (i.e. repetition patterns: matrices, symmetry and the like), were still scarcely understood.
We must note that those conclusions are only "tentative", as the pilot test is only intended to find working hypotheses that should be rejected or validated through statistically valid full-scale experiments with high statistical power.

Experiment 2
For the second evaluation, students were given the assembly drawing of a pulley shown in Figure 2 [22]. All non-standard parts had been modeled in advance, and the students were asked to use all of the modeled parts, together with the corresponding standard parts they could find in the library, to assemble the pulley (Task 1). They were asked to produce a sub-assembly of the support including parts 1, 2 and 3. Then, an assembly drawing similar to Figure 2 had to be extracted from the assembly (Task 2). Finally, they were asked to extract detailed drawings of all of the parts (Task 3).
Differences between the self-evaluation and the teacher evaluation are tabulated in Table 3. In view of the table, some working hypotheses are derived from the experiment:
1. Students felt unable to determine whether their assemblies included all linked documents (assertion 1.1). We had assumed that a theoretical exposure to the criteria and tools useful for collecting and organizing all the CAD files related to the same design project (i.e. file packing and management tools like "Save as" or "Pack and go") would be sufficient for them to comprehend and use those tools. It seems that explicit practice in using such tools for testing is mandatory.
2. In spite of being explicitly asked to produce a certain sub-assembly, the students still misunderstand assertion 5.4.
3. The dispersion between self-evaluation and teacher evaluation increased from the previous experiment (from 0.28 up to 0.38). However, the differences are smaller for the assembly assertions (0.22) than for the drawing-extraction assertions (0.44).
4. The teachers reported that they felt uncomfortable while marking. We suspect that mixing assertions related to the assembly process with those related to drawing extraction makes the rubric unnecessarily unclear.
5. The results reflect that students understand dimensions 1 to 3 better than dimensions 4 to 6.
6. The correlation between self-evaluation and teacher evaluation is nearly null for a significant number of students, probably due to their lack of involvement. Interviews conducted after the students answered the rubric revealed some unwillingness to complete new rubrics, as they perceived them as an overload that provided no benefit to them.

Experiment 3
For the third evaluation, students were given detail drawings of the swiveling pulley shown in Figure 3 [21]. They were asked to perform four tasks: 1) model parts 1 and 4; 2) extract the drawing of part 4; 3) assemble the modeled parts together with the pre-modeled parts 2, 3 and 10, and the standard parts obtained from the library; and 4) extract the assembly drawing.
To prevent confusion due to the simultaneous evaluation of different tasks, a separate rubric was used for each task. In order to get the students to utilize the rubric, its completion was mandatory, and they were informed that their marks would be increased depending on the mean difference between self-evaluation and teacher evaluation for each of the four tasks: marks were increased linearly, from 0 for a difference of 0.5 or higher up to 1 for a null difference.
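This linear reward can be expressed as a simple clamped function (a sketch; the function name is our own):

```python
def reward(mean_difference):
    """Mark bonus for accurate self-evaluation: 1.0 for a null
    self/teacher mean difference, decreasing linearly to 0.0 at a
    mean difference of 0.5 or higher."""
    return max(0.0, 1.0 - mean_difference / 0.5)
```

For instance, a student whose self-evaluation differs from the teacher's by one Likert step on average (0.25) earns half of the maximum bonus.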
Besides, to determine whether the students had already comprehended the generic dimensions, marking the main assertions was mandatory, while the detailed assertions were to be marked only when students considered that this step could add valuable extra information. The complete lists of assertions of the rubrics can be found in Appendix 1. Table 4 shows the self-evaluation of Task 1. The information from all subjects was processed, in spite of the fact that subject 22 marked assertions 2.1 and 2.2 but did not mark dimension 2.
Depending on the dimension, a minimum of 39% and a maximum of 57% of subjects considered that detailed evaluation was still necessary. We can presume that, after just two previous exposures to rubrics, roughly half of the subjects (51%) had completed the expand-contract process, i.e. they had comprehended the meaning of the main quality criteria conveyed by the six main dimensions and no longer needed detailed assertions. However, this supposition is confounded by at least two facts. The first can be detected in the table: some subjects marked certain assertions only to highlight a difference relative to the average mark they had assigned to the corresponding dimension (see, for example, dimension 6 for subject 5). The second fact is subtler: perhaps some subjects marked the dimensions after reading the assertions in order to understand what was being asked of them. If so, they were doing what we intended: comprehending the meaning of the dimensions. But we cannot claim that they had already comprehended it.
Differences between the self-evaluation and the teacher evaluation for Task 1 are tabulated in Table 5. On the right side, the average difference between the subjects and the teacher is divided into three groups: those that never required expanding the rubric, those that sometimes used the expansion, and those that always used it. Since the differences between self-evaluation and teacher evaluation are similar for the three groups (they range from 0.25 to 0.30), we deduce that answering only the main dimensions did not significantly affect the precision of the evaluation.
For brevity, the tables of the self-evaluation of the other three tasks are not reproduced, as they reflect the same trends. Differences between self-evaluation and teacher evaluation are summarized in Tables 5 to 8. Subjects 11 and 23 did not solve Task 4, so they did not mark it. In examining the tables, some working hypotheses can be derived from the experiment:
1. Modeling tasks are more accurately self-evaluated than assembly or drawing tasks. This result may indicate that the syllabus of the course is not suitably balanced. It may also indicate that assembling and extracting drawings are more complex tasks than modeling parts. Further studies are required to validate or reject those hypotheses.
2. The differences between the self-evaluation and the teacher evaluation for Task 1 are similar to Experiment 1 (0.27 on average). Task 3 was an assembly task, so its discrepancy (0.35) should be compared with Task 1 of Experiment 2. This comparison is difficult because we can only presume that the difference for Experiment 2 lies somewhere between 0.22 and 0.38. Finally, the drawing tasks resulted in discrepancies around 0.41, which appear to be similar to the discrepancies measured in Experiment 2 (although, again, a detailed comparison is impossible). Hence, we suppose that the reward may assist in maintaining the cooperation of the students, but does not significantly affect the quality of their self-evaluation. It appears that they simply cannot perform better.
3. Approximately 50% of the students marked the detailed assertions, and it seems that most of them still do not understand the generic dimensions and feel the need for further details. Hence, the expand-contract process seems to work, but requires exposure to more than three rubrics.

LESSONS LEARNED
We next summarize the lessons learned on using rubrics as a method aimed at disclosing quality criteria, which can be conveyed through good practices along the training period of novice product designers.

Table 4. SELF-EVALUATION IN TASK 1 OF EXPERIMENT 3
Table 5. DIFFERENCES BETWEEN THE SELF-EVALUATION AND THE TEACHER EVALUATION
Table 6. DIFFERENCES FOR TASK 2
Table 7. DIFFERENCES FOR TASK 3
Table 8. DIFFERENCES FOR TASK 4

Specific Rubrics Cover Quality Along the Training Period
Generic rubrics are useful as frames to compare and classify particular rubrics. They are also valid as scoring forms for experienced evaluators. But particular rubrics are the only valid choice to share evaluation criteria and convey quality criteria.
Particular rubrics must contain assertions clearly linked to the task being evaluated: in a particular rubric, do not ask whether the model may be re-designed; instead, ask whether one particular dimension or shape may be safely modified.
Furthermore, every task must be evaluated separately, to prevent fatigue and confusion. This procedure is particularly important if we intend to evaluate modeling, assembling and drawing tasks.

Levels of Detail
Good practices must be organized hierarchically around the main dimensions, allowing a new set of rubrics to be obtained that explicitly makes quality a main goal for both the teacher and the students.
Besides, the hierarchical structure of the new set of rubrics must allow different levels of detail, which fit the varying stages in the evolution of the learning process. This process allows incremental evaluation, which prevents quality from being only evaluated at the end.
The level of detail must follow an expand-contract criterion, where the assertions are very detailed the first time they are introduced, and are later recursively abstracted to prevent fatigue and force the students to comprehend and use general quality criteria.
Another strategy to emphasize some dimensions is by way of weighting coefficients. We have used different weights for different stages and they appear to work, but we have not yet defined a clear strategy aimed at maximizing quality comprehension.

Quantitative Assertions and Metrics are Required
Assertions in rubrics cannot include qualitative measurements of quality. Assertions like "The model is good" contain an implicit evaluation of quality through "goodness." These kinds of assertions may be useful in an informal checklist with which an expert designer verifies whether he/she has obtained a good-quality model. This procedure typically derives from style guides, but it is clearly inappropriate for novice designers: they must still learn the meaning of "good," so they cannot use this abstract concept to evaluate their own performance.
Instead, they must be provided not only with clearly quantitative assertions, but also with tools and metrics to produce an accurate measurement of their own performance.
We note that our preliminary experiments show that students fail more than expected in interconnecting tasks (i.e. "You must modify the model") with assertions (i.e. "The model accepts local modifications without crashing").

Students Do Not Use Checking Tools Unless They Need Them
Conveying to the students the importance of using simple testing tools already available in current CAD applications requires the teacher to design specific tasks requiring such tools.
Students do not trigger checking tools unless they are required to answer specific questions related to those tools. Questions may be direct (e.g. "Is the curvature of the spline greater than a threshold at any point?") or indirect ("Is the spline soft or sharp?"). Students also need to be familiar with the tool; otherwise they dare not use it.

Students are not Qualified to Improve Rubrics
Students should not be exposed to rubrics before the teacher feels comfortable with them. If the teacher cannot use the rubric fluently to mark the exams, the rubric is not yet ready for the students to use for self-evaluation and for comprehending quality criteria. Students should use rubrics only after they have been designed and tested by the teachers.
Our difference tables with averages have proved a valid tool for detecting which assertions students find easy to comprehend and use, and where they encounter systematic difficulties. However, we have been unable to use them to move from the qualitative detection of weak points to the production of improved rubrics.

CONCLUSIONS
Current teaching of CAD skills does not generally enforce quality from the very beginning of instruction. "Best practices" are introduced later, at least in large enterprises and OEMs, but modifying bad habits costs money and time.
New approaches are required to guarantee that quality-oriented teaching of CAD skills becomes the rule, not the exception.
The CAD rubric has been successfully tested as a valid means for the students to grasp the importance of quality and become more involved in producing quality models.
In this paper, we conclude that rubrics must adapt to the specific concept practiced at each stage of the training period, in order to disclose suitable strategies that enforce quality comprehension. The paper has also described lessons learned in using rubrics as a method aimed at disclosing quality criteria, which can be conveyed through good practices along the training period of novice product designers: levels of detail are required; quantitative assertions and metrics are also required; and students need suitable training to acclimate to rubrics. Those lessons learned should be reformulated as strong hypotheses to be validated or rejected in the future, by way of experiments with suitable statistical power.
In the future, we intend to organize good practices around different quality dimensions and develop a set of rubrics that will introduce the diverse quality aspects adjusted to the varying levels of detail that are required at different phases in the training period.

APPENDIX 1
Assertions of the rubrics used in Experiment 3.

Model Rubric
2.1 The models replicate the shape of the part.
2.2 The models replicate the size of the part.
3. The model is consistent. (15%)
3.1 Models allow local changes (design variations) without causing undesired changes or errors.
3.2 The profiles of any generalized sweep operations are always fully constrained and do not contain duplicated or segmented lines.
4. The model is concise. (25%)
4.1 The profiles could not be obtained with a substantial reduction of constraints.
4.2 The models could not be obtained with a substantial reduction of operations.
5. The model is clear and understandable. (15%)
5.1 Modeling operations are labeled to convey design intent in the modeling tree.
6. The model conveys design intent. (15%)
6.1 The geometric constraints in the profiles help to highlight the design geometry.
6.2 Models use operations that help to convey the functionality of the parts.
6.3 Models use all suitable datums and do not use unnecessary datums.
6.4 Models use design features.

Assembly Rubric
2.1 All of the parts are correctly placed and do not have undesired interferences.
3. The assembly is consistent. (15%)
3.1 Assembly allows local changes (part replacement) without causing undesired changes or errors in parts that should not be affected.
3.2 Assembly allows valid movements without causing undesired movements.
4. The assembly is concise. (20%)
4.1 The assemblies could not be assembled with a substantial reduction of mating conditions.
4.2 Repetition patterns are used for equal parts.
5. The assembly is clear and understandable. (15%)
5.1 Document is well organized (names of files and structure of folders).
6. The assembly conveys design intent. (15%)
6.1 Assembly is subdivided into sub-assemblies which convey functionality.
6.2 The sequence of assembly helps to convey design intent.
6.3 Assembly uses mating conditions which highlight design intent instead of geometry.

Part Drawing Rubric
2.1 The views are suitable to display the part.
2.2 The cuts are suitable to display the part.
2.3 The dimensions are suitable to display the part.
3. The part drawing is consistent. (15%)
3.1 The views, cuts and dimensions are linked to the model.
3.2 The views, cuts and dimensions meet the standards.
4. The part drawing is concise. (15%)
4.1 There are few or no redundant views in the drawing.
4.2 There are few or no redundant cuts in the drawing.
4.3 There are few or no redundant dimensions in the drawing.
5. The part drawing is clear and understandable. (15%)
5.1 Sheet format and scales are suitable for the project and are suitably used.
6. The part drawing conveys design intent. (15%)
6.1 The use of orientation, symmetry and the like in the views helps to highlight the design intent.
6.2 The use of orientation, symmetry and the like in the cuts helps to highlight the design intent.
6.3 The use of orientation, symmetry and the like in the dimensions helps to highlight the design intent.

Assembly Drawing Rubric
2.1 The views, cuts and dimensions are suitable to display the assembly.
2.2 The drawing includes all of the detail numbers.
2.3 The bill of materials is complete.
3. The assembly drawing is consistent. (20%)
3.1 The views, cuts and dimensions are linked to the assembly.
3.2 The views, cuts and dimensions meet the standards.
3.3 Bill of materials and detail numbers are linked to the assembly and related to each other.
4. The assembly drawing is concise. (15%)
4.1 There are few or no redundant views, cuts and dimensions in the drawing.
4.2 There is no redundant information in either the detail numbers or the bill of materials.
5. The assembly drawing is clear and understandable. (15%)
5.1 Sheet format and scale are suitable for the project and are suitably used.
6. The assembly drawing conveys design intent. (15%)
6.1 The use of orientation, symmetry and the like in the views, cuts and dimensions helps to highlight the design intent.
6.2 The sequence of the detail numbers helps to convey design intent.
6.3 The information included in the bill of materials helps to convey design intent.