VIETNAM NATIONAL UNIVERSITY HO CHI MINH CITY
UNIVERSITY OF INFORMATION TECHNOLOGY
ADVANCED PROGRAM IN INFORMATION SYSTEMS
DO TUAN KIET - 15520398
BUILDING A MOBILE BASED MULTIPLE-CHOICE
TEST GRADING SYSTEM
BACHELOR OF ENGINEERING IN INFORMATION SYSTEMS
HO CHI MINH CITY, 2021
ASSESSMENT COMMITTEE
The Assessment Committee is established under the Decision , dated , by the Rector of the University of Information Technology.
- Chairman
- Member
ACKNOWLEDGEMENTS
First of all, we would like to express our appreciation to Doctor NGUYEN THANH BINH for his time and guidance during the making of this thesis. His teaching has greatly influenced our work and helped us change in positive ways.
We also thank all the members of the Faculty of Information Systems, as well as everyone at the University of Information Technology, for their guidance and support given with the greatest care.
Last but not least, we would like to show our gratitude to our family, our friends, and our classmates for all the support and love we have received on our path to maturity.
December, 2021
Do Tuan Kiet - student of AEP 2015
TABLE OF CONTENTS
Chapter 1. PROBLEM STATEMENT
  1.1 Introduction
  1.2 Related Researches
Chapter 2. BACKGROUND KNOWLEDGE
  2.1 Flutter
  2.2 OpenCV and Algorithms
    2.2.1 Overview
    2.2.2 OpenCV on Android
    2.2.3 Transformation of Affines
Chapter 3. OUR SUGGESTED PROCESSING
  3.1 About the environment
  3.2 The proposed processing flow
Chapter 4
  4.2 Detail of Design
    4.2.1 Exam marking model
    4.2.2 Data structure
    4.2.3 UI/UX and Features
  4.3 Advantage of application
Chapter 5. CONCLUSION AND FUTURE WORK
  5.1 Conclusion
  5.2 Future Projects
References
Chapter 1
PROBLEM STATEMENT
1.1 Introduction
... World War I in 1914.
Nowadays, MCQs are very popular and are used by almost all universities and high schools to evaluate and rate students. Automatic multiple-choice marking support systems are also widespread, but specialized equipment is very expensive and offers little customization, while computer-based systems combined with good-quality scanners are also costly and, due to their limited mobility and complicated use, are only suitable for large organizations and big exams.
In the Industry 4.0 era, owning a phone with a built-in camera is not difficult, and camera image quality has improved considerably compared to many years ago. Combined with artificial intelligence and computer vision techniques, this makes it easy to integrate grading into a mobile application and to save money and time.
Therefore, this graduation thesis pursues the idea of creating a multiple-choice marking application to support teachers in marking exams in an easier, more flexible, and more cost-effective way.
1.2 Related Researches
Specialized automated multiple-choice scoring systems were invented very early on. The first commercially available dedicated scoring machine was from IBM, code-named the IBM 805, released in 1937 [8]. It was a breakthrough in educational technology. The 805 machine reads pencil marks by sensing graphite, because graphite makes an electrical connection when it comes into contact with the machine's contacts. A key table determines whether an answer is correct or incorrect based on the connection point of the shaded answer. The machine still needs human help because it cannot score multiple answer sheets automatically. Then, in the early 1960s, the IBM 805 was replaced by a new technology called Optical Mark Recognition (OMR), used in the IBM 1230 [9]. IBM implemented the first successful OMR machine, designed by Everett Franklin Lindquist. Lindquist's proposed mechanism is based on the contrast difference of a light beam at the marked positions on the answer sheet to recognize which answer is selected: the machine recognizes a marked position because it reflects less light than the unmarked locations on the answer sheet [10]. This is the premise of OMR technology. Today, besides IBM, several other companies also provide scoring machines based on the same idea and hold a huge market share, such as Scantron. However, product and maintenance costs are high, and the cost of a Scantron machine varies depending on the model, ranging in price from $5,400 to $17,275 [13]. In addition to the high cost of equipment, these specialized scoring machines require dedicated answer sheets, which cost $0.15 per sheet, and pen colors and answer sheet templates are limited.
Over the past two decades, personal computer (PC)-based OMR systems using image processing technology have been developed to overcome the limitations of specialized OMR machines. Sandhu, Singla, and Gupta [14] proposed using optical character and symbol region recognition as the basis of a new OMR system for scoring MCQs. The new solution is cost-effective, fast, and makes it easy to customize the answer sheet format.
In 1999, Chinnasarn and Rangsanseri [15] developed the first PC-based grading system that reads the answer sheet through images obtained by scanning the paperwork with a conventional optical scanner. In order for the system to work, a white answer sheet must first be read to sample regions of interest, such as the subject code and the student code. Then, the answer sheets are processed based on the learned answer sheet model.
T. Nguyen and his partners [16] developed a reliable algorithm to use cameras instead of optical scanners with the aim of simplifying multiple-choice marking. The article demonstrated that collecting answer sheets via camera is faster and more portable than using optical scanners because of the camera's smaller size and lighter weight. upi [17] developed an open-source grading system based on the Java language. The application is designed to grade two kinds of answer sheets: one is a traditional MCQ sheet containing only the answer options and no questions; the other is a template in which the questions are integrated into the answer sheet but with a smaller number of multiple-choice questions. The application provides two forms of encoding and identifying student code information; the codes are based on a barcode and on a matrix. upi and partners [18] focused on solving the problem of identifying student identifiers using a matrix on the same answer sheet, achieving a 100% recognition rate even when the image is rotated or has a large deviation. Next, Bonai and partners [19] continued upi's research on decoding student identification information by applying Optical Character Recognition (OCR) to digits recorded in a 7-segment format. A pattern consisting of a 7-segment display for each digit is designed for candidates to color in. The positions of the digits and of the segments within each digit are predefined, allowing the numbers 0 to 9 to be recognized from the input patterns. Although the method seems limited because it is based on coloring instead of handwriting, it is still more intuitive and easier to apply than the matrix identifier for encoding the candidate code, and it gives a success rate greater than 90%. Furthermore, upi [20] devised a method by which candidates can change their answers two or three times if they get them wrong. If a candidate changes an answer, they simply annotate the error with a circle and write the correct letter next to the answer area. Then, during processing, if an error circle is filled in, the handwritten character is recognized, whether it is an A, B, C, E, or F character.
Sattayakawee [21] proposed three different versions of grid-structured answer sheets that yielded an average accuracy of up to 99.9%. Her method relies on ticks instead of filling in the answer box. Chai [22] designed an automatic scoring algorithm that focuses on returning feedback: the proposed method scores the paper answer sheet, then annotates the scanned image by marking the correctness of each answer and sends this result sheet to the candidate via email. The results show that the method is fast, with up to 1.4
image processing operations and pixel linear statistics to find the location of the filled answer [25], and a neural network is used to identify the image region at each answer position after it has been cut out. Although [25] gives good results and can recognize many types of responses, its speed is slow because it has to run through each cut-out partition.
This thesis proposes a method of marking MCQs automatically using phone cameras and answer sheet templates similar to those used in recent national graduation exams. Unlike the recent papers, instead of applying processing steps based entirely on pure image processing as in [23] [24], with the linear counting method to find the answer marker positions, or using a neural network to recognize each answer symbol after it has been cut out as in [25], we build a deep learning neural network on the YOLOv4 foundation to locate all the filled answers. The algorithm is optimized to run on mobile phones.
1.3 Motivation
Our first goal is to build a MOBILE-BASED MULTIPLE-CHOICE TEST GRADING SYSTEM that helps teachers mark multiple-choice tests faster, using an Android application combined with OpenCV. We consider the user experience carefully; what is special is that the user can interact with the application with the least amount of learning time. The future goal is to successfully release a stable version of this application to the Play Store. Due to time constraints, this thesis only focuses on the first goal.
1.4 Contributions
- We studied the related applications [6] [7] on the Play Store to create the UI design of our own application.
- We created multiple fill-in forms for users to select from (60 or 100 questions).
- We also provide the option of a single-choice or multiple-choice filling form.
- Users can view statistics and share the results of the exams on social networks.
- When the amount of data increases, users can save the results to the shared social feature and clean the device storage.
Chapter 2
BACKGROUND KNOWLEDGE
2.1 Flutter
2.1.1 Introduction
There are many ways to build a mobile app, such as:
- Native code: Java/Kotlin for Android and Swift/Objective-C for iOS.
- Hybrid (e.g., React Native): hybrid apps are deployed in a native container that uses a mobile WebView object.
- Cross-platform: Flutter with the Dart language. Flutter uses Dart and a collection of native widgets to create stunning cross-platform apps.
This application was developed with Flutter.
Why Flutter?
Flutter is a cross-platform SDK for the Dart programming language. It can build apps for iOS, Android, Mac, Windows, Linux, and the web from a single code base, and it has Google's backing.
Applications built with Flutter are virtually indistinguishable from those built using the Android SDK, both in look and performance. Moreover, with small tweaks, the same code can be adapted to both platforms.
Unified app development: Flutter has tools and libraries to help us easily bring our ideas to life on iOS and Android. If you're new to mobile development, Flutter is an easy and fast way to build stunning mobile apps. If you are an experienced iOS or Android developer, you can use Flutter for your views and leverage a lot of your existing Java/Kotlin/ObjC/Swift code.
Beautiful and expressive UI: delight users with Flutter's beautiful built-in widgets in the Material Design and Cupertino (iOS-flavor) styles, rich motion APIs, smooth natural scrolling, and platform awareness.
The interface runs at 60 fps.
Apps created with Flutter perform much better than apps created with other cross-platform development frameworks such as React Native and Ionic. Here are a few reasons why we might be interested in Flutter:
Flutter uses Dart, a fast, object-oriented language with many useful features like mixins, generics, isolates, and static types.
Flutter has its own UI elements along with a mechanism to render them on the Android and iOS platforms. Most user interface components conform to the Material Design guidelines out of the box.
Flutter applications can be developed using IntelliJ IDEA, an IDE very similar to Android Studio.
Currently, there are many languages that support cross-platform development. One big name to mention is React Native: companies such as UberEats, Discord, and Facebook have adopted React Native as part of their technological shift. But Google didn't give up either; they noticed React Native's reach, and after two years of waiting, Google released the alpha version of Flutter.
- Support for hot reloading:
Usually, with Android programming in Android Studio, every time we change a line of code we have to rebuild and run the application again. Hot reload saves us from having to rebuild the application; it simply reloads the screen whose code has changed. This saves a lot of time for the developer.
- Code structure:
In contrast to React Native, Flutter does not separate data, styles, and templates. You might find this weird if you are used to React Native; however, this approach is also convenient and widely accepted. Flutter doesn't need additional interface languages like JSX or XML, or special tools to create layouts. Using Flutter, you can save time by not having to switch from design mode to code and vice versa; Flutter lets you do everything on one screen. In addition, all the necessary tools are accessible from the same location.
- Development environment setup:
Flutter simplifies the installation process. The framework also provides a useful tool for checking system errors called "Flutter Doctor."
- Performance:
Flutter's architecture allows us to build fast-performing native applications. Because Flutter doesn't need a bridge, it can work much faster; as a result, Flutter can run animations at 60 fps.
2.1.2 Advantage of Flutter
- Fast development: the Hot Reload feature works in milliseconds to show the effect of your changes. In addition, Hot Reload also helps you add features and fix bugs without rebuilding the source code, saving even more time.
- Low cost: with Flutter, you can develop the application from one code base, which means you only need one developer instead of separate developers for Android and iOS.
- Expressive and flexible UI: there are a lot of components for building a beautiful interface in the Material Design and Cupertino styles, supporting a variety of motion APIs, smooth scrolling, and more.
- Native performance: Flutter widgets incorporate platform differences such as scrolling, navigation, icons, and fonts to deliver the best performance on both iOS and Android.
- Open source: both Flutter and Dart are open source and free to use, and they provide extensive documentation and community support to help with any issues you may encounter.
2.1.3 Flutter's disadvantage
- The application is large in size.
- There is no web app.
- There is no Android TV or Apple TV support.
2.1.4 Important components in Flutter
- Widget
The widget is responsible for forming the structure of the widget tree (a tree-like data structure that defines the interface structure drawn in Flutter applications), while the element class is responsible for managing the widgets and handling the state of each widget in that tree. The basic idea of Flutter is similar to that of a web application: when the state of the data changes, such as when the user switches screens or changes data on the screen, the screen is refreshed by deleting old widgets and drawing new widgets. Therefore, instead of having to know about StoryBoards (in iOS) or Activities (in Android), Flutter brings all the concepts related to the interface into a single concept called the "widget." The state management of widgets is also handed back to the programmer. This difference is even more noticeable because Dart is a component-based language. In Flutter, when we want to adjust a widget's size or position, we do not do so in the current widget but are encouraged to wrap the widget in another widget that does just that. This makes the saying "In Flutter, everything is a widget" truer than ever: the screen is a widget, an interface element is a widget, and even layout information is a widget.
- StatefulWidget
A StatefulWidget keeps its mutable data in a separate State object, and every State object has a widget property. This property is the instance of the StatefulWidget that created that State. As a result, from within the State it is possible to read the values passed to the StatefulWidget instead of having to pass them through the constructor.
- StatelessWidget
Stateless widgets are widgets that do not contain any state. All values of a StatelessWidget are final, so they cannot be changed at runtime. A StatelessWidget only shows what is passed in through its constructor.
Conversely, to change the state of a widget, we use a StatefulWidget.
2.2 OpenCV and Algorithms
OpenCV [OpenCV] is an open-source (see http://opensource.org) computer vision library available from http://sourceforge.net/projects/opencvlibrary. The library is written in C and C++ and runs under Linux, Windows, and Mac OS X. There is active development on interfaces for Python, Ruby, Matlab, and other languages.
2.2.1 Overview
OpenCV was designed for computational efficiency and with a strong focus on real-time applications. OpenCV is written in optimized C and can take advantage of multicore processors.
Most computer scientists and practical programmers are aware of some facet of the role that computer vision plays, but few people are aware of all the ways in which computer vision is used. For example, most people are somewhat aware of its use in surveillance, and many also know that it is increasingly being used for images and video on the Web. A few have seen some use of computer vision in game interfaces. Yet few people realize that most aerial and street-map images (such as in Google's Street View) make heavy use of camera calibration and image stitching techniques. Some are aware of niche applications in safety monitoring, unmanned flying vehicles, or biomedical analysis. But few are aware of how pervasive machine vision has become in manufacturing: virtually everything that is mass-produced has been automatically inspected at some point using computer vision.
Since its alpha release in January 1999, OpenCV has been used in many applications, products, and research efforts. These applications include stitching images together in satellite and web maps, image scan alignment, medical image noise reduction, object analysis, security and intrusion detection systems, automatic monitoring and safety systems, manufacturing inspection systems, camera calibration, military applications, and unmanned aerial, ground, and underwater vehicles. It has even been used in sound and music recognition, where vision recognition techniques are applied to sound spectrogram images. OpenCV was a key part of the vision system in the robot from Stanford, "Stanley," which won the $2M DARPA Grand Challenge desert robot race [Thrun06].
2.2.2 OpenCV on Android
- The sdk/java folder contains an Android library Eclipse project providing the OpenCV Java API that can be imported into the developer's workspace.
- The sdk/native folder contains OpenCV C++ headers (for JNI code) and native Android libraries (*.so and *.a) for the ARM-v5, ARM-v7a, and x86 architectures.
- The sdk/etc folder contains the Haar and LBP cascades distributed with OpenCV.
- The apk folder contains Android packages that should be installed on the target Android device to enable OpenCV library access via the OpenCV Manager API (see details below).
- The samples folder contains sample application projects and their prebuilt packages (APK). Import them into an Eclipse workspace (as described below) and browse the code to learn about possible OpenCV applications for Android.
- The doc folder contains various OpenCV documentation in PDF format. It is also available online at [27].
2.2.3 Transformation of Affines
An affine transformation is any transformation that can be expressed in the form of a matrix multiplication followed by a vector addition. In OpenCV, the standard way of representing such a transformation is as a 2-by-3 matrix.
It is easily seen that the effect of the affine transformation A·X + B is exactly equivalent to extending the vector X into the vector X' and simply left-multiplying X' by T.
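Written out, with A the 2-by-2 linear part and B the 2-by-1 translation (this only restates the standard OpenCV convention; the entries a_ij and b_i are generic coefficients):

\[
A = \begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix},\quad
B = \begin{bmatrix} b_{0} \\ b_{1} \end{bmatrix},\quad
T = \begin{bmatrix} A & B \end{bmatrix} =
\begin{bmatrix} a_{00} & a_{01} & b_{0} \\ a_{10} & a_{11} & b_{1} \end{bmatrix},\quad
X' = \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},\qquad
A\,X + B = T\,X'.
\]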
Affine transformations can be visualized as follows: any parallelogram ABCD in a plane can be mapped to any other parallelogram A'B'C'D' by some affine transformation, and the affine transformation is defined uniquely by three vertices of the two parallelograms. If you like, you can think of an affine transformation as drawing your image onto a big rubber sheet and then deforming the sheet by pushing or pulling on the corners to make different kinds of parallelograms.
There are two situations that arise when working with affine transformations.
In the first case, we have an image (or a region of interest) we’d like to transform;
in the second case, we have a list of points for which we’d like to compute the result
of a transformation.
- Dense affine transformations
In the first case, the obvious input and output formats are images, and the implicit requirement is that the warping assumes the pixels are a dense representation of the underlying image. This means that image warping must necessarily handle interpolations so that the output images are smooth and look natural. The affine transformation function provided by OpenCV for dense transformations is cvWarpAffine():
void cvWarpAffine(
    const CvArr*  src,
    CvArr*        dst,
    const CvMat*  map_matrix,
    int           flags   = CV_INTER_LINEAR | CV_WARP_FILL_OUTLIERS,
    CvScalar      fillval = cvScalarAll(0)
);
Here, src and dst refer to an array or image, which can be either one or three channels and of any type (provided they are the same type and size). The map_matrix is the 2-by-3 matrix we introduced earlier that quantifies the desired transformation. The next-to-last argument, flags, controls the interpolation method as well as either or both of the following additional options (as usual, combined with Boolean OR).
CV_WARP_FILL_OUTLIERS
Often, the transformed src image does not fit neatly into the dst image; there are pixels "mapped" there from the source file that don't actually exist. If this flag is set, then those missing values are filled with fillval (described previously).
CV_WARP_INVERSE_MAP
This flag is for convenience, to allow inverse warping from dst to src instead of from src to dst.
cvWarpAffine performance
It is worth knowing that cvWarpAffine() involves substantial associated overhead. An alternative is to use cvGetQuadrangleSubPix(). This function has fewer options but several advantages; in particular, it has less overhead and can handle the special case where the source image is 8-bit and the destination image is a 32-bit floating-point image. It will also handle multichannel images.
void cvGetQuadrangleSubPix(
    const CvArr*  src,
    CvArr*        dst,
    const CvMat*  map_matrix
);
What cvGetQuadrangleSubPix() does is compute all the points in dst by mapping them (with interpolation) from the points in src that were computed by applying the affine transformation implied by multiplication by the 2-by-3 map_matrix. (Conversion of the locations in dst to homogeneous coordinates for the multiplication is done automatically.)
One idiosyncrasy of cvGetQuadrangleSubPix() is that there is an additional mapping applied by the function. In particular, the result points in dst are computed according to the formula:
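Per the OpenCV reference for cvGetQuadrangleSubPix(), writing map_matrix as [A | b], the mapping is:

\[
\mathrm{dst}(x, y) = \mathrm{src}\bigl(a_{00}x' + a_{01}y' + b_{0},\; a_{10}x' + a_{11}y' + b_{1}\bigr),\qquad
x' = x - \frac{\mathrm{width}(dst) - 1}{2},\quad
y' = y - \frac{\mathrm{height}(dst) - 1}{2}.
\]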
Observe that the mapping from (x, y) to (x', y') has the effect that the points in the destination image at the center will be taken from the source image at the origin, even if the mapping M is an identity mapping. If cvGetQuadrangleSubPix() needs points from outside the image, it uses replication to reconstruct those values.
Computing the affine map matrix
OpenCV provides two functions to help you generate the map_matrix. The first is used when you already have two images that you know to be related by an affine transformation, or that you would like to approximate in that way:
CvMat* cvGetAffineTransform(
    const CvPoint2D32f*  pts_src,
    const CvPoint2D32f*  pts_dst,
    CvMat*               map_matrix
);
Here, pts_src and pts_dst are arrays containing three two-dimensional (x, y) points, and map_matrix is the affine transform computed from those points.
The pts_src and pts_dst in cvGetAffineTransform() are just arrays of three points, defining two parallelograms. The simplest way to define an affine transform is thus to set pts_src to three corners in the source image (for example, the upper and lower left together with the upper right of the source image). The mapping from the source to the destination image is then entirely defined by specifying pts_dst, the locations to which these three points will be mapped in that destination image. Once the mapping of these three independent corners (which, in effect, specify a "representative" parallelogram) is established, all the other points can be warped accordingly.
The second method is to use cv2DRotationMatrix(), which computes the map matrix for a rotation around an arbitrary point, along with optional rescaling. This is just one possible kind of affine transformation, but it represents an important subset that has an alternative (and more intuitive) representation that's easier to work with in your head:
CvMat* cv2DRotationMatrix(
    CvPoint2D32f  center,
    double        angle,
    double        scale,
    CvMat*        map_matrix
);
The first argument, center, is the center point of the rotation. The next two arguments give the magnitude of the rotation and the overall rescaling. The final argument is the output map_matrix, which (as always) is a 2-by-3 matrix of floating-point numbers. If we define α = scale·cos(angle) and β = scale·sin(angle), then this function computes the map_matrix to be:

\[
\begin{bmatrix}
\alpha & \beta & (1-\alpha)\cdot \mathrm{center}_x - \beta\cdot \mathrm{center}_y \\
-\beta & \alpha & \beta\cdot \mathrm{center}_x + (1-\alpha)\cdot \mathrm{center}_y
\end{bmatrix}
\]
You can combine these methods of setting the map_matrix to obtain, for example, an image that is rotated, scaled, and warped.
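Continuing the illustrative sketch from above and reusing its src and dst images (the 40-degree angle and 0.8 scale factor are arbitrary example values), a rotation about the image center could be applied like this:

/* Rotate the (hypothetical) src image 40 degrees about its center and
   shrink it by 20%. */
CvMat* rot_mat = cvCreateMat(2, 3, CV_32FC1);
CvPoint2D32f center = cvPoint2D32f(src->width / 2.0f, src->height / 2.0f);

cv2DRotationMatrix(center, 40.0, 0.80, rot_mat);  /* angle in degrees, scale factor */
cvWarpAffine(src, dst, rot_mat);                  /* apply the rotation + scaling */

cvReleaseMat(&rot_mat);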
Sparse affine transformations
We have explained that cvWarpAffine() is the right way to handle dense mappings. For sparse mappings (i.e., mappings of lists of individual points), it is best to use cvTransform():
void cvTransform(
    const CvArr*  src,
    CvArr*        dst,
    const CvMat*  transmat,
    const CvMat*  shiftvec = NULL
);
In general, src is an N-by-1 array with Ds channels, where N is the number of points to be transformed and Ds is the dimension of those source points. The output array dst must be the same size but may have a different number of channels, Dd. The transformation matrix transmat is a Ds-by-Dd matrix that is then applied to every element of src, after which the results are placed into dst. The optional vector shiftvec, if non-NULL, must be a Ds-by-1 array, which is added to each result before the result is placed in dst.
In the case of an affine transformation, there are two ways to use cvTransform() that depend on how we'd like to represent our transformation. In the first method, we decompose our transformation into the 2-by-2 part (which does rotation, scaling, and warping) and the 2-by-1 part (which does the translation). Here, our input is an N-by-1 array with two channels, transmat is our local homogeneous transformation, and shiftvec contains any needed displacement. The second method is to use our usual 2-by-3 representation of the affine transformation. In this case, the input array src is a three-channel array within which we must set all third-channel entries to 1 (i.e., the points must be supplied in homogeneous coordinates). Of course, the output array will still be a two-channel array.
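As a small illustrative sketch (the point coordinates and matrix entries below are arbitrary values, not taken from the thesis), the first method, a 2-by-2 transmat plus a 2-by-1 shiftvec, might look like this:

#include <cv.h>

int main() {
    /* Four hypothetical 2D points to transform (e.g., detected anchor points). */
    float pts_in[4][2]  = { {10, 20}, {200, 22}, {12, 300}, {205, 305} };
    float pts_out[4][2];

    CvMat src = cvMat(4, 1, CV_32FC2, pts_in);   /* N-by-1 array, 2 channels */
    CvMat dst = cvMat(4, 1, CV_32FC2, pts_out);

    /* 2-by-2 linear part (rotation/scale/shear) and 2-by-1 displacement. */
    float a[4] = { 0.9f, -0.1f,
                   0.1f,  0.9f };
    float b[2] = { 5.0f, 12.0f };
    CvMat transmat = cvMat(2, 2, CV_32FC1, a);
    CvMat shiftvec = cvMat(2, 1, CV_32FC1, b);

    cvTransform(&src, &dst, &transmat, &shiftvec);  /* pts_out[i] = A*pts_in[i] + b */
    return 0;
}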
Chapter 3
OUR SUGGESTED PROCESSING
3.1 About the environment
- Install and use the application as follows:
1. Download the application on an Android device.
2. Print the template using the sharing feature. The application supports two templates to download: one for a maximum of 60 questions and one for a maximum of 100 questions. After that, the student fills in the inner circles with a pencil on the answer sheet.
3. The user opens the application and creates a class and an exam. Note that the user has to choose the template corresponding to the one printed in step 2.
4. After creating the exam, the user creates the answer key for each exam code. The user can either scan a filled-in answer key or fill it out by hand.
5. Start marking by scanning the student's answer sheet.
- The template has three main parts:
- Student ID number:
The student has to fill in this field (Figure 3.1) to determine which student the exam belongs to. The order is left to right, with the corresponding number at each index.
- Exam code:
(Figure: the "FILLING ID NUMBER AND KEYCODE" area of the sheet, showing the KEY CODE field)
Figure 3.2 Exam code
- Student's answer:
The student has to fill in the answers (Figure 3.3). The student's score will be based on comparing the marked answers with the answer key. The order is also left to right, with the corresponding number at each index.
3.2 The proposed processing flow
(Processing flow: image preprocessing → margin of answer sheet → image partitioning → determine the answer → mark → show results)
- Image preprocessing: OpenCV recognizes four main anchor points to determine the coordinates of the sheet.
- After determining the coordinates of the four main anchor points, the image is transformed to crop the region bounded by those four coordinates (a rough sketch of this step is shown after this list).
- This step determines the areas of the three main parts using the sub anchor points: the student code, the exam code, and the answers.
- Depending on the filled fields on the answer sheet, the application uses OpenCV to get the values of the student code, exam code, and student answers, and sends them to Flutter to start marking.
- Mark: Flutter compares the values (a JSON object) returned by OpenCV and starts marking.
- Show results: after marking successfully, the score and student code are shown on the screen.
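As a rough sketch only (the four anchor points are assumed to have already been detected, and the 800 x 1100 pixel size of the rectified sheet is an arbitrary choice for illustration), the crop-and-straighten step could be written with the OpenCV C interface from Chapter 2, using a perspective warp because four corner points are available:

#include <cv.h>

/* Illustrative sketch only: rectify the answer sheet once the four anchor
   points have been located (clockwise from the top-left corner).
   Assumes a 3-channel input photo; the 800 x 1100 output size is arbitrary. */
IplImage* rectify_sheet(const IplImage* photo, const CvPoint2D32f corners[4]) {
    CvPoint2D32f target[4] = {
        cvPoint2D32f(0, 0),      cvPoint2D32f(799, 0),
        cvPoint2D32f(799, 1099), cvPoint2D32f(0, 1099)
    };

    IplImage* rectified = cvCreateImage(cvSize(800, 1100), IPL_DEPTH_8U, 3);
    CvMat*    persp_mat = cvCreateMat(3, 3, CV_32FC1);

    cvGetPerspectiveTransform(corners, target, persp_mat);  /* 4-point mapping   */
    cvWarpPerspective(photo, rectified, persp_mat);         /* crop + straighten */

    cvReleaseMat(&persp_mat);
    return rectified;
}

An affine warp using three of the four anchor points would also work; the perspective form is sketched here only because all four anchors are available and it also corrects camera tilt.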