Maths-CAA Series
May 2003

TAL - A National Database of Questions - Classification is the Key

Contents
Introduction
Motivation for Creating TAL
Why do we use questions?
An Automated Test Generator
An Approach to Classification
Practical classification problems
A General Classification Scheme
The Mathematics Subject Scheme
How to specify a question slot
Conclusions
Appendix A
Appendix B
References
FDTL awards the idea of shared assessment resources in this form was not supported, perhaps because the case was not made well enough.
At present TAL is used by the University of Bristol and Liverpool John Moores University. As part of a European Leonardo project, ATHENA, the following universities all used TAL with their students: Università degli Studi di Perugia, CPE Lyon, the University of Szeged and the Universidad Politécnica de Valencia. The project set a mathematics test at university entrance level and translated the test into the different languages. The students then all took the same test and the results were compared. Any other university interested in taking part in this project is welcome, as are schools, which may find questions based on the SEFI Core Zero syllabus of interest.
2. Motivation for Creating TAL
In the 1970s Bristol University staff used to mark students' work every week. The pressure of work eventually led to the abandonment of this practice, and there was a consequent fall in the mathematical competence of students. The idea of occasional progress tests was formulated to address this fall in standards, but clearly if we could get a machine to mark students' work every week then all would be well. We conceived of a huge database of questions, with students set tests that were randomly generated from the database. Questions are expensive to generate and need continual updating. Many people, however, are teaching roughly the same material, and so the same questions could be widely used. The concept of a National Database of questions was born. This concept is much wider than mathematics, and so the TAL project includes questions from many different subject areas, although the largest contributors of questions are mathematics, chemistry and physiology. As TAL's home is in Engineering we are also generating a database of Engineering questions, stemming from a project based in a new degree at Bristol called Engineering Design (www.fen.bris.ac.uk/engdesign). The Engineering database aims to help students understand engineering concepts; it is particularly aimed at questions that do not require detailed calculation but rather a feel for the subject.
2.1 Why do we use questions?
The main goal of education is that students should learn. To learn mathematics
they need to try their hand at problems. We want students to understand the
concepts of mathematics but such concepts are often quite complex. When
students can understand how to do a question involving a concept, the under-
standing may be quite one-dimensional, i.e. they can use the concept only in
that sort of question. In order to help students to have a richer understanding of
TAL - A National Database of Questions - Classification is the Key
Jon Sims Williams and Mike Barry
Dept Engineering Maths, Bristol University
Email: jon.sims.williams@bristol.ac.uk, mike.barry@bris.ac.uk
Abstract: This paper discusses a CAA system called TAL. TAL is unusual in that it allows users to generate large numbers of equivalent tests from a specification. The tests are generated from a database of questions, and all questions must be classified. Some of the difficulties involved in classifying questions are discussed.
1. Introduction
The Test and Learn system, TAL, is a database of questions with facilities for staff to set tests and for students to take tests. It was first conceived in 1994. We built a system which allowed students to run a program on their PC linked to a remote computer. The system collected a test from the remote computer, ran the test locally on the PC and then sent the results back to the host computer. The system used the basic computer-to-computer direct links that were available at that time and worked very reliably. A bid to JISC (Higher Education Funding Committee JISC Technology Applications Project 2/352) was successful in 1996. With this support we built a web-based version that was first used in 1997. The task of populating the database with a large enough set of questions was perceived to be too onerous for one university or department, and so the system was built to allow several universities to use the same set of questions, in the hope that participating staff would gradually add to the database and we would all benefit. This is not, of course, the first time that the idea of sharing questions has been mooted; early in the TAL project we visited Glyn James and Nigel Steele at Coventry University and were given the paper results of a previous attempt to share questions. Martin Greenhow of Brunel University has kindly offered his set of questions, Mathletics, to be included in the database. The normal history of shared sets of assessment questions is that several people are very keen and offer questions at the outset; others are very keen to accept the sets of questions; and then the whole project dies. We hope that, by having the database of questions open for all to use remotely over the Internet, this project may yet succeed; however, it really needs full support, and perhaps central funding, by the LTSN subject centres. If the LTSN Generic Centre accepted the concept of a generic database then that would be even better, as we already support questions from three subject areas: mathematics, chemistry and physiology. In the recent
that all the tests are similar.
3. An Automated Test Generator
We first conceived of the automatic test generator as just picking questions randomly and generating tests of about the same length and difficulty. So one would choose a topic, say "Differentiation", and then all the tests would be generated from questions with this classification. This was relatively simple to do, but pedagogically it did not work: teachers need to craft their tests so that the questions are more precisely chosen. The problem of random choice showed up when some tests had two or more questions that were similar, while others had quite different questions.
Automatic test setting is wonderful for the teacher in that several tests are set very quickly, but more control is needed in the formation of tests. The current system allows the user to set a test by specifying the topic or topics to be used for each question in the test. The concept is illustrated by the short test below:
Example test design

Test Name: test1                          Course Unit: EMAT1011
Setter: Jon Sims Williams                 Dates available: 3/5/02-15/5/02
Length of test: 11:34 +/- 2 minutes       Facility of test: 60% +/- 5%

Question No.  Main Topic       SubTopic1                    SubTopic2
1             functions        understanding form of graph
2             functions        domain and/or range          simple algebraic functions
3             differentiation  practical application        speed
4             differentiation  function of function rule    trigonometric functions

Figure 1: A simple specification of a mathematics test.
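To make the shape of such a specification concrete, the short Python sketch below expresses the information in Figure 1 as a small data structure. It is our own illustration, not TAL's internal representation; the field names, and the reading of the test length as roughly 11.34 decimal minutes, are assumptions.

    # A minimal sketch (not TAL's actual data model) of the information in Figure 1.
    from dataclasses import dataclass, field

    @dataclass
    class QuestionSlot:
        # A slot is specified by a classification path, from main topic downwards.
        topics: list

    @dataclass
    class TestSpec:
        name: str
        course_unit: str
        setter: str
        dates_available: str
        length_minutes: float        # target length of the whole test (assumed decimal minutes)
        length_tolerance: float      # "+/- 2 minutes"
        facility_percent: float      # target average facility
        facility_tolerance: float    # "+/- 5%"
        slots: list = field(default_factory=list)

    test1 = TestSpec(
        name="test1", course_unit="EMAT1011", setter="Jon Sims Williams",
        dates_available="3/5/02-15/5/02",
        length_minutes=11.34, length_tolerance=2.0,
        facility_percent=60.0, facility_tolerance=5.0,
        slots=[
            QuestionSlot(["functions", "understanding form of graph"]),
            QuestionSlot(["functions", "domain and/or range", "simple algebraic functions"]),
            QuestionSlot(["differentiation", "practical application", "speed"]),
            QuestionSlot(["differentiation", "function of function rule", "trigonometric functions"]),
        ],
    )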
concepts, we expose them to a wide variety of questions.
The goal is clear. We know that students find it helpful if they can measure themselves against a standard to see if they are doing well, and pedagogically we want to expose students to a rich variety of problems to reinforce their understanding. To reach this goal we need to motivate the students to do the questions, and we need to provide the questions.
There are two ways of providing questions:
1. Generate questions parametrically, where the questions all have the same structure but have different numbers (a minimal sketch of this approach is given after this comparison).
2. Provide a large number of different sorts of questions on the same topic.
Method 1 has two advantages:
• Students gain confidence because they can practise until they succeed on a familiar type of question.
• It is much cheaper to generate questions parametrically.
Method 2 gives students lots of practice, but as each question is different they do not learn how to answer questions by drill or formula but rather gain a more multi-dimensional understanding of the topic. The disadvantages of this approach are the converse of those of method 1. Most importantly, students may get discouraged if the questions are all so different that the understanding they gain from successfully completing one question cannot be built on in the next question they attempt. Ideally the questions lead on one from another, increasing the scope of understanding without being too challenging. A good teacher can do this by hand-selecting questions from the database. Automatic support is more difficult; it could be done, but at present we only collect and use data on the relative difficulty of questions.
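As an illustration of method 1, the sketch below generates a drill question on differentiating a polynomial with randomly chosen coefficients and works out the expected answer. It is a hypothetical example of parametric generation, not TAL's question format.

    # A hypothetical parametric question generator: the structure of the question is
    # fixed and only the numbers change between instances.
    import random

    def make_differentiation_question():
        a = random.randint(2, 9)
        b = random.randint(2, 9)
        c = random.randint(1, 9)
        question = f"Differentiate f(x) = {a}x^3 + {b}x^2 + {c}x with respect to x."
        answer = f"{3 * a}x^2 + {2 * b}x + {c}"
        return question, answer

    question, answer = make_differentiation_question()
    print(question)
    print("Expected answer:", answer)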
We use questions for motivation. However, just providing questions is not enough. Many students seem to need the motivation of getting some marks before they will put the effort into trying questions. As soon as you give marks, students complain if they think that other students are in some way cheating; yet to be most effective in supporting learning we want to offer access to the tests from students' homes and halls of residence. Probably the most effective resolution of this contradiction is to set a suite of tests, all on the same set of topics, and then tell students that one of these tests will be used for a supervised test and the others are available for them to use for practice. One of TAL's special strengths is that it allows you to set tests according to a specification so
4. An Approach to Classification
The design of a classification scheme is primarily influenced by the needs of its users. There are various publicly available classifications of mathematics [2-6], but none of them has as its prime aim the classification of questions for teachers to use.
The users are setting a test from a large database of questions, so they must be able to find questions that suit their needs. Teachers need to be able to find questions on the subject that they are teaching, and at the same time they need to know whether the questions will require any expertise other than the specified subject. So if we take some examples:
1. The basic skill of differentiating a polynomial.
2. The use of differentiation of a polynomial to find its maximum.
3. Given a simple model of the position of a ball expressed as a polynomial, find out where the ball is travelling fastest.
4. Invent a simple model for a sledge sliding down a hill with a linear resistance to motion. Find the point where the sledge travels fastest.
Now all these problems involve differentiation of a polynomial, so they could all be classified under the same subject; however the questions require increasing levels of modelling skills. In the classification scheme of the previous section, these questions were classified as follows:
1. Mathematics/Calculus & Analysis/calculus/differentiation/polynomials
2. Mathematics/Calculus & Analysis/calculus/differentiation/max;min;
stationary points/polynomials
3. Mathematics/Calculus & Analysis/calculus/differentiation/max;min;
stationary points/Model Application/polynomials
In terms like “max;min; stationary points”, the semi-colon is used to mean
“and/or”.
If the user specified that he wanted to choose questions on …/differentiation then all these questions would have been selected. The user could, however, have excluded questions requiring the higher skills by excluding, for example, questions classified as:
Mathematics/Calculus & Analysis/calculus/differentiation/max;min; stationary points.
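A rough sketch of how this kind of include/exclude filtering on classification paths might work is given below. It is our own illustration, under the assumption that classifications are stored as path lists; it is not TAL's actual query code.

    # Illustrative prefix matching on classification paths.
    def matches(question_path, prefix):
        # True if the question's classification starts with the given prefix path.
        return question_path[:len(prefix)] == prefix

    def select(questions, include_prefix, exclude_prefixes=()):
        # Keep questions under include_prefix, dropping any under an excluded sub-path.
        chosen = []
        for path in questions:
            if matches(path, include_prefix) and not any(matches(path, ex) for ex in exclude_prefixes):
                chosen.append(path)
        return chosen

    differentiation = ["Mathematics", "Calculus & Analysis", "calculus", "differentiation"]
    max_min = differentiation + ["max;min; stationary points"]
    questions = [
        differentiation + ["polynomials"],
        max_min + ["polynomials"],
        max_min + ["Model Application", "polynomials"],
    ]
    print(select(questions, differentiation))                              # all three questions
    print(select(questions, differentiation, exclude_prefixes=[max_min]))  # only the basic skill question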
This test has only four questions, and realistically one should never set such short tests as their reliability is low; however, here we have chosen to set two questions on functions and two on differentiation. Rather than just choose the questions on differentiation randomly, we have used two additional classifications to be more precise: the first of these questions uses differentiation to find a speed in a practical application, and the second specifies that the differentiation requires an understanding of how to differentiate functions of functions, the functions concerned being trigonometric functions. So the setter of the test has a clear idea about the sort of questions s/he is setting. Lastly, two additional requirements are that the test should take 11.34 ± 2 minutes and the facility of the test should be 60% ± 5%. The facility is the percentage of students who normally get a particular question right; it is averaged over all the questions, and each lecturer can have a different set of facility and time-to-do values for each class taught. The test generator searches the database for suitable questions and generates as many tests as it can to satisfy the specification.
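The following sketch shows one way such a generator could work, assuming each question carries stored facility and time_to_do values. It is a deliberate simplification of the process described above, not TAL's implementation.

    # A rough sketch of automatic test generation from per-slot candidate questions.
    import itertools
    import random

    def average(values):
        return sum(values) / len(values)

    def generate_tests(slot_candidates, target_time, time_tol, target_facility, facility_tol,
                       max_tests=20):
        # slot_candidates: for each question slot, the list of questions (dicts with
        # 'time_to_do' in minutes and 'facility' in percent) matching that slot.
        shuffled = [random.sample(candidates, len(candidates)) for candidates in slot_candidates]
        tests = []
        for combination in itertools.product(*shuffled):
            total_time = sum(q["time_to_do"] for q in combination)
            mean_facility = average([q["facility"] for q in combination])
            # Accept the combination only if the whole test meets both tolerances.
            if (abs(total_time - target_time) <= time_tol
                    and abs(mean_facility - target_facility) <= facility_tol):
                tests.append(combination)
                if len(tests) >= max_tests:
                    break
        return tests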
We like to set about 20 different tests satisfying the specification, and in order to do this it may be necessary to relax the specification a little and allow a wider range of questions to be used for each question slot. The specification can be relaxed by saying that the type of function used in the function of function differentiation does not matter: just don't specify it. Similarly we could allow any sort of application of differentiation, not just questions based on speed. Other relaxations could be to specify another sort of question that could be used for each question slot, e.g. questions on differentiation using the chain rule could be used as well as questions on the function of a function rule.
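Two of the relaxations just described can be sketched directly, again purely as an illustration of the idea (the path values are taken from the example test; the function names are our own):

    # Two illustrative ways of relaxing a question slot.
    def relax_by_truncation(slot_paths):
        # Drop the most specific level from each allowed path, e.g. stop caring about
        # the type of function used in a function-of-function differentiation question.
        return [path[:-1] for path in slot_paths]

    def relax_by_alternative(slot_paths, extra_path):
        # Allow another sort of question for the same slot, e.g. the chain rule as
        # well as the function of function rule.
        return slot_paths + [extra_path]

    slot = [["calculus", "differentiation", "function of function rule", "trigonometric functions"]]
    print(relax_by_truncation(slot))
    print(relax_by_alternative(slot, ["calculus", "differentiation", "chain rule"]))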
Figure 1 is illustrative of the idea of how tests are specified, but MainTopic, SubTopic1 and SubTopic2 have no particular meaning, as will be shown in later examples. The whole basis of this test generator is that all the questions are classified, and the classification of questions uses a tree structure. The question slots in the example above are specified as:
1. Mathematics/Calculus & Analysis/functions/Understanding form of graph
2. Mathematics/Calculus & Analysis/functions/domain;range/simple algebraic functions
3. Mathematics/Calculus & Analysis/calculus/differentiation/practical application/speed;acceleration
4. Mathematics/Calculus & Analysis/calculus/differentiation/function of function rule/trigonometric functions
The 'Mathematics' is not redundant, as the database contains questions on other subjects. A semicolon is interpreted as 'and/or'.
tions. If a teacher had taught the idea of 'ill-conditioned' as part of understanding about solving equations and the errors that can occur, he would want to find the question classified under terms such as 'solution of equations'. Now the SEFI Core Zero syllabus [2] operates at the school-university interface level and has syllabus items that can be described by the classifications:
Algebra/Linear Laws/understand the terms 'gradient' and 'intercept' with reference to straight lines
Algebra/Linear Laws/recognise when two lines are parallel
Algebra/Linear Laws/obtain the solution of two simultaneous equations in two unknowns using graphical and algebraic methods
These are all very much in the right area but incomplete, as there is no reference in [2] to ill-conditioning. If we introduced a classification under the SEFI Core Zero syllabus about ill-conditioning, then one could no longer set tests from it knowing that all questions were in the syllabus. The right response is probably to say that one must know about ill-conditioned matrices to answer this question and classify the question as:
Maths/Algebra/Linear Algebra/Matrices/Matrix properties/ill-conditioned (4.1.1)
It is possible that people would want to teach about ill-conditioned equations in a way that does not involve looking at matrices at all, so a classification:
Maths/Algebra/Linear Algebra/linear equations/ill-conditioned equations (4.1.2)
may also be viable.
Because of the need to be comprehensible to the target audience, teachers and university lecturers, we have adopted a twin approach to the classification of mathematics questions. Firstly, we have used the SEFI Core Zero curriculum for engineering mathematics [2], edited by Leslie Mustoe and Duncan Lawson. As part of a Leonardo da Vinci project called ATHENA 2000/1, a set of questions based on the classification in [2] was generated, and a TAL test on these questions was taken by students in different European countries. This syllabus is useful for both school sixth forms and some first-year university work. Secondly, we have adapted Eric Weisstein's World of Mathematics classification [3]. This classification is aimed at helping users to find the meaning of
You will notice that there is a citation order used here, where "max;min; stationary points" comes before "Model Application" or "SimpleModelConstruction". The normal rule in classification schemes is to place "general" classifications before "special" ones. However, "SimpleModelConstruction" is a more general classification than "max;min; stationary points", as it could be applied to all sorts of problems, not just differentiation problems; but we need the citation order to relate to the way in which the teacher will think. Normally a teacher will choose to teach differentiation and then, when the subject is well established, will start to ask students to use the process of differentiation on "real" problems involving some sort of model. Rather arbitrary decisions have been made here, as a teacher might instead want to teach modelling and then choose what sort of mathematical techniques could reasonably be used by the students. At present we allow two classifications for each question, so it would be possible to implement both approaches.
4.1 Practical classification problems
In this section example questions are displayed and then an approach to assigning a classification is discussed.
Example 1
Two linear equations are said to be ‘ill-conditioned’ when:
Answer 1: They cannot be solved
Answer 2: They represent two parallel straight lines
Answer 3: They represent two nearly parallel straight lines
Answer 4: They represent lines of which one is parallel to either the x
or y axis
This question is posed in terms of equations, and the answers are all in terms of the geometric understanding of the equations. In the mathematical literature, however, the term 'ill-conditioned' applies to matrices rather than sets of equa-
Answer 2: The adjoint and the minor of the element a_ij are identical when i + j is odd
Answer 3: The cofactor and the minor of the element a_ij are identical when i + j is odd
Answer 4: The cofactor and the minor of the element a_ij are identical when i + j is even
This question needs to be classified under both minors and adjoints, but in the
classification hierarchy we have:
Algebra/Linear Algebra/Determinants/minor or cofactor
and
Algebra/Linear Algebra/Matrices/Matrix Operations/Adjoint
because the adjoint is the transpose of the matrix obtained by replacing each element a_ij by its cofactor A_ij, so it is a matrix operation.
So once again we need to decide on a major classification and then have sub-classifications to show what additional knowledge is required. This question is easy to sort out, as one cannot fully understand it without understanding adjoints, and one cannot understand adjoints without understanding cofactors, so classifying the question under Adjoints only is satisfactory.
This example points out a problem: questions often cross many boundaries in a simple subject classification. We need to find a classification scheme that allows a teacher to pick out questions that are on the topic being taught, but that then makes it easy to remove questions requiring additional knowledge that has not yet been taught.
It should be clear from these examples that the classification of questions is a complex process. When it has been done well, however, the setting of tests is easy. Because the process is so complex we need a team working together rather than relying on one or two people.
The classification, as you can see, mixes together a subject classification, "Mathematics/Calculus & Analysis/calculus/differentiation/", with a problem type description such as "practical application", the sort of application, "speed;acceleration", and then sometimes information about the sort of functions used in the question, e.g. trigonometric functions. The test setter needs to be able to specify questions using all these sorts of data, but a
terms. So when we look up Linear Equation we find a definition in terms of y = ax + b. Now, part of what a teacher wants in terms of questions about Linear Equations will be the definition; but s/he will also want to know how to solve them, what goes wrong when you solve them, and so on. So we have built upon Weisstein's classification in accordance with our experience of teachers' needs. Parts of both the SEFI syllabus and the adapted Weisstein classification are shown in the Appendices.
The system allows multiple classifications, so both of the interpretations (4.1.1) and (4.1.2) are implemented. If you look at Weisstein's classification on the Web you will find that (4.1.2) does not exactly fit.
Example 2
The equation of the straight line that passes through the point (1,2,3) and is parallel to the line joining the points (2,3,1) and (5,4,2) is?
This question requires two ideas: the student needs to be able to form displacement vectors, so that s/he knows the direction of the line, and needs to be able to write down the equation of a straight line using its direction and a point through which it passes. So there are two possible classifications: Displacement Vectors and Equations of Lines. This question is really about equations of lines and only incidentally requires one to know how to form displacement vectors, so we would classify the question as:
Algebra/vector algebra/equations in vector form/equations of lines
But students also need to know about displacement vectors, so we put this underneath equations of lines as other 'required knowledge':
Algebra/vector algebra/equations in vector form/equations of lines/displacement vectors
A new scheme has been invented to avoid treating ‘required knowledge’ in this
way but it is not yet fully implemented.
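One possible way such a scheme could represent this, purely as a hypothetical sketch (the field names and the separate annotation are our own illustration, not the new scheme itself):

    # Keep 'required knowledge' as a separate annotation rather than appending it to
    # the classification path (an illustrative representation only).
    question = {
        "classification": ["Algebra", "vector algebra", "equations in vector form", "equations of lines"],
        "required_knowledge": ["displacement vectors"],
    }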
Example 3
For an n x n matrix A, which of the following statements is true?
Answer 1: The adjoint and minus the minor of the element a_ij are identical when i + j is even
Application Area - this tree allows the editor to describe the application area, e.g. physics/motion/speed;velocity.
In addition we use the following lists, where only one selection is made from each list:
1. Modelling level: this classification only applies to certain questions. We distinguish the following modelling levels:
• Simple Model Construction - a simple model must be constructed to complete the question.
• Model Application - a model is provided as part of the question and must be used.
• Practical Application - this indicates that the student is expected to be able to use the particular mathematics described in the Subject tree pointer in a practical situation. Implicitly this will usually mean that they have to use a standard model used in this sort of application.
2. Evaluation Type: This takes one of two values: Numeric or Algebraic, and
tells us if the question requires the answer in a numeric or algebraic form.
3. Bloom's Educational Objectives [7] are a classification of the type of question, thus:
• Knowledge: the question simply checks whether the student knows something factual.
• Comprehension: usually refers to a question in which data is given and
the question checks if the student understands what it means.
• Application: The use of previously learned information in new and
concrete situations to solve problems that have single or best answers.
• Analysis: The breaking down of informational materials into their
component parts, examining (and trying to understand the
organizational structure of) such information to develop divergent
conclusions by identifying motives or causes, making inferences, and/or
finding evidence to support generalizations.
• Synthesis: Creatively or divergently applying prior knowledge and skills
to produce a new or original whole.
• Evaluation: Judging the value of material based on personal values/
opinions, resulting in an end product, with a given purpose, without real
right or wrong answers.
Lastly the numeric field is used to describe how theoretical the question is.
This is very important, as a question posed in an abstract manner on a simple
tree structure containing all this is not ideal. A more general approach to classifying questions is described in the next section.
5. A General Classification Scheme
TAL is conceived as a question database for many different subjects, so we have developed a database structure that allows each subject area to have its own subject classification. Essentially each question is classified under one or more top-level subjects, e.g. mathematics and medicine. Then, for each of these top-level subjects, the subject editor is able to specify three lists, three trees and a numeric to describe questions. The questions can have multiple descriptors taken from these lists and trees. Each list has a subject-specific title so that users understand the interface in the language suited to the subject area. So in mathematics we are interested in the set of function types used in a question, but in chemistry they are not. Instead, a list of question descriptors has been proposed:
Question_descriptor = { Law or definition, Nomenclature, Qualitative test, Reactions & reactivity, Structure, etc }
Rather than go into the details of the computer representation of this, the scheme will be described from a mathematical viewpoint. The scheme described below has been designed and built by a student, but as yet it is not implemented in the production system.
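As a rough illustration of the kind of structure this implies (a sketch under our own assumptions, not the student-built implementation just mentioned; the tree and list titles are those of the mathematics scheme described in the next subsection):

    # A sketch of a per-subject classification scheme: three named trees, three named
    # lists and one numeric descriptor.
    from dataclasses import dataclass, field

    @dataclass
    class Tree:
        title: str
        # Each node maps a label to its subtree; an empty dict marks a leaf.
        nodes: dict = field(default_factory=dict)

    @dataclass
    class SubjectScheme:
        subject: str
        trees: dict = field(default_factory=dict)   # up to three trees, keyed by title
        lists: dict = field(default_factory=dict)   # up to three lists, keyed by title
        numeric_title: str = ""

    maths = SubjectScheme(
        subject="Mathematics",
        trees={
            "Subject": Tree("Subject", {"Calculus & Analysis": {"calculus": {"differentiation": {}}}}),
            "Function Type": Tree("Function Type", {"trigonometric functions": {}}),
            "Application Area": Tree("Application Area", {"physics": {"motion": {"speed;velocity": {}}}}),
        },
        lists={
            "Modelling level": ["Simple Model Construction", "Model Application", "Practical Application"],
            "Evaluation Type": ["Numeric", "Algebraic"],
            "Bloom": ["Knowledge", "Comprehension", "Application", "Analysis", "Synthesis", "Evaluation"],
        },
        numeric_title="Theoretical Numeric",
    )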
5.1 The Mathematics Subject Scheme
For mathematics we have selected three trees, called:
Subject tree - for the subject tree we have adopted two schemes: one based on Eric Weisstein's World of Mathematics classification [3], which is the same as has been used in most of the examples above. There are considerable benefits from having a classification that is related to a syllabus, and so in the ATHENA project we used the SEFI syllabus [2]. If you use the SEFI syllabus you know that the questions you find are in the syllabus.
Function Type - this allows questions to be described in terms of the types of functions used, e.g. {simple algebraic functions, trigonometric functions, exponential/log functions, inverse/hyperbolic functions}. Since this structure is a tree it allows a fine description of function types, although only the types given in the list are used in classification at present.
place questions that the user does not find appropriate. However, it would be desirable to be able to control the 'facility' and perhaps the 'time_to_do' of some of the question slots in the test; so we propose to allow the user to specify the minimum facility and maximum length of questions in a question slot. Finally, the test setter can select the average facility and length of the whole test and how accurately these two conditions must be satisfied. So the test can be specified as running for an hour plus or minus 5 minutes, and similarly the average facility can be required to lie in the range (55, 65), for example. The test compiler will then find all the questions that match the specifications for each question slot and try to generate the number of tests requested.
6. Conclusions
The TAL system has been running since 1997. Initially a very small number of questions were available to set mathematics tests, but now we have nearly 2000 questions, mainly suited to first-year university mathematics. There are another 2000 questions on other subjects. Automatic test setting is wonderful in that it saves considerable staff time, and the availability of multiple tests allows students to get practice and be motivated. It is, however, easy to be sloppy about setting tests automatically, and we have proposed some additional controls in section 5 to make it easier to build appropriate tests. For the TAL database to be a really good resource it needs editors for every sub-area of mathematics to maintain the standards and encourage the development of new questions. It is doubtful if any one university can really maintain such a resource on its own, and so a collaborative effort is needed, with several universities working together on its development.
Appendix A - A part of the SEFI classification
Analysis and Calculus
Rates of change and differentiation
Average & instantaneous rates of change
Definition of derivative at point
Derivative as instantaneous change rate
Derivative as gradient at a point
Difference between derivative & derived function
Use notations: dy/dx, f'(x), y' etc.
Use table of simple derived functions
Recall derived functions for simple fns
Use multiple & sum differentiation rules
subject may be quite inappropriate. The scheme used for this Theoretical Numeric is:
0 or 1      a pure numbers question
2           the question is about basic rules or nomenclature of the subject
3 or 4      there is a simple parameter in the question
5 or 6      the question requires a simple theoretical understanding of the subject
7, 8 or 9   the question requires a proof of some theoretical property
We have had very little experience in using these, although quite a number of questions have been classified on this scale.
5.2 How to specify a question slot
Section 3 describes the way tests are set with the existing system. In this section we describe the use of the new classification scheme. Tests are set by specifying a number of question slots sufficient to form a test of the required length. If the questions used have an average time_to_do of 5 minutes, then 12 question slots will be needed to form a test that is an hour long. Each question slot is defined by specifying:
• The subject, or a disjunction of subjects, from the subject tree
• The set of permitted function types
• If the question is to be an application problem, the application area
• If the problem is a modelling type of question, the level of modelling
• The Bloom category for the question and the evaluation type {numerical or algebraic}
• Finally, a constraint on the level of the theoretical numeric.
When the question slot has been specified, the database is searched to find all the questions that satisfy the specification, and the additional subject knowledge that is implicit in the set of questions is displayed. The test setter may then select any of these subject areas to exclude them from the test.
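A sketch of what such a slot specification and search could look like is given below. The field names, and the assumption that questions are stored as dictionaries carrying the descriptors discussed above, are ours; this is not the new scheme's actual implementation.

    # An illustrative question slot specification and the search it drives.
    from dataclasses import dataclass, field

    @dataclass
    class SlotSpec:
        subjects: list                    # disjunction of subject-tree paths
        function_types: list = field(default_factory=list)
        application_area: list = None     # application-area path, if the slot is an application problem
        modelling_level: str = None
        bloom: str = None
        evaluation_type: str = None       # "Numeric" or "Algebraic"
        max_theoretical: int = 9          # upper bound on the Theoretical Numeric

    def slot_matches(question, spec):
        if not any(question["subject"][:len(s)] == s for s in spec.subjects):
            return False
        if spec.function_types and not set(question.get("function_types", [])) <= set(spec.function_types):
            return False
        if spec.application_area is not None and \
                question.get("application_area", [])[:len(spec.application_area)] != spec.application_area:
            return False
        for attr in ("modelling_level", "bloom", "evaluation_type"):
            wanted = getattr(spec, attr)
            if wanted is not None and question.get(attr) != wanted:
                return False
        return question.get("theoretical", 0) <= spec.max_theoretical

    def search(questions, spec):
        candidates = [q for q in questions if slot_matches(q, spec)]
        # Show the additional knowledge implicit in the candidates so the test setter
        # can choose to exclude some of these areas.
        extra_knowledge = {k for q in candidates for k in q.get("required_knowledge", [])}
        return candidates, extra_knowledge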
From our experience of using automatic test setting, we find that just controlling the content of question slots is not sufficient for good test design. At present the system allows the user to look at each question in each of the tests set and, provided there are spare questions, to swap spare questions in to re-
Theory
Total differential
Vector Derivative
Increasing & Decreasing
Monotone Decreasing
Monotone Increasing
Monotonic Function
Etc.
Mean-Value Theorems
etc.
9. References
[1] TAL, the Bristol University "Test and Learn System", at www.tal.bris.ac.uk
[2] The SEFI core curriculum for engineering mathematics, http://learn.lboro.ac.uk/mwg/core.html
[3] Eric Weisstein's World of Mathematics classification, http://mathworld.wolfram.com/
[4] MathematicsWeb, http://www.mathematicsweb.org/mathematicsweb/show/Index.htt (this is basically about journals but has a three-level classification, e.g. maths/applied maths/numerical analysis)
[5] Zentralblatt MATH, http://www.emis.de/ZMATH/
[6] 2000 Mathematics Subject Classification, http://www.ams.org/msc/
[7] Bloom's Educational Objectives, http://www.mathematicsweb.org/mathematicsweb/show/Index.htt
Use the product rules of differentiation
Use the quotient rules of differentiation
Chain rule for differentiation
Relation of gradient and derivative
Equation of tangent & normal to graph
Stationary points, maximum and minimum values
Find if function is increasing using differential
Define a stationary point ofa function
Distinguish turning and stationary point
Locate a turning point using derivative
Classify Turning Points by 1st derivative
Find second derivative of simple functions
Classify stationary Points by 2nd derivative
Appendix B - A part of the General Classification Scheme
Calculus and Analysis
Calculus
Differentiation
Chain;product;ratio Rules
Differentiability
Directional derivative
Equations of Lines
Function of function rule
Gradients;slopes in R2
higher derivatives
Implicit;log differentiation
Introductory
Maclaurin series
Max;min or stationary points
Global Maximum
Inflexion point
Local Minimum
Saddle point
Numerical differentiation
Parametric equations
Partial Derivative
Second derivative test
Tangents;normals
Taylors series