


DOCUMENT INFORMATION

Basic information

Title: Sign Language Recognition System
Authors: Nguyen Thi Thanh An, Pham Thuy Thai An, Chau Ngoc Han, Do Khanh Linh, Chong Sinh Dung
Supervisor: Nguyen The Dai Nghia, Lecturer
University: University of Economics and Law
Major: Information Systems
Document type: Mini Project
Year of publication: 2023
City: Ho Chi Minh City
Number of pages: 15
File size: 2.46 MB



UNIVERSITY OF ECONOMICS AND LAW

FACULTY OF INFORMATION SYSTEMS

Teachable Machine Mini Project

SIGN LANGUAGE RECOGNITION SYSTEM

Lecturer: Nguyen The Dai Nghia

Group: Group 8

Course ID: 225MI5211


Ho Chi Minh City, July 2023


I. Project description

Building a sign language recognition system is an innovative AI project aimed at facilitating communication between individuals with disabilities and those who wish to learn sign language. This project is constructed using Teachable Machine, which allows developers to create machine learning models through a simple drag-and-drop interface. The model has been trained on various commonly used hand signs of individuals with disabilities, and it can identify sign language gestures with high precision. The sign language recognition system utilizes a webcam to capture images of the hand signs used by individuals with hearing impairments to communicate. The system then analyses the images and employs Teachable Machine technology to interpret the intended message. This information is displayed on a screen, enabling individuals without hearing impairments to understand and communicate with those who have hearing disabilities.
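As a rough illustration of this webcam-to-screen pipeline, the sketch below loads a model exported from Teachable Machine with the TensorFlow.js image library and prints the highest-scoring label for each frame. The model URL, the page elements, and the loop details are illustrative assumptions, not the project's actual export.

```typescript
// Minimal sketch of the webcam -> model -> screen pipeline described above.
// MODEL_BASE is a hypothetical location of a Teachable Machine image export.
import * as tmImage from '@teachablemachine/image';

const MODEL_BASE = 'https://example.com/sign-language-model/';

async function run(): Promise<void> {
  // Load the exported model topology and its class metadata.
  const model = await tmImage.load(MODEL_BASE + 'model.json', MODEL_BASE + 'metadata.json');

  // Set up a 224x224 mirrored webcam feed.
  const webcam = new tmImage.Webcam(224, 224, true);
  await webcam.setup();
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  // On-screen text so a hearing user can read the recognized sign.
  const label = document.createElement('div');
  document.body.appendChild(label);

  async function loop(): Promise<void> {
    webcam.update(); // grab the latest frame
    const predictions = await model.predict(webcam.canvas);
    // Show the most probable class, e.g. "Cam on".
    const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
    label.textContent = `${best.className} (${(best.probability * 100).toFixed(1)}%)`;
    window.requestAnimationFrame(loop);
  }
  window.requestAnimationFrame(loop);
}

run().catch(console.error);
```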

The lack of information and knowledge also contributes to a barrier between these two groups. Individuals without hearing impairments often do not know how to interact and communicate effectively with individuals who are deaf or hard of hearing. They may be unfamiliar with sign language or lack the ability to read and write captions to convey information to individuals with hearing impairments. This creates a communication gap and makes mutual understanding challenging. As an AI project, this sign language recognition system helps break down the language barrier and reduces the difficulties in communication between individuals without hearing impairments and those who are deaf or hard of hearing. Employing Teachable Machine in sign language recognition represents a significant step towards creating equality and opportunities for individuals with hearing impairments to engage in communication and social interaction. At the same time, it provides individuals without hearing impairments with a tool to understand and participate in the sign language community, contributing to building a more diverse and inclusive world.

Table 1 - Overview of Sign Language Recognition System for the Deaf

1. Function

Gesture recognition can be used to support disabled people. Because handicapped people account for a large percentage of our community, we should make an effort to interact with them in order to exchange knowledge, perspectives, and ideas. To that aim, we wish to establish a means of contact. Individuals who are hearing-impaired can communicate with one another using sign language; a handicapped person can communicate without using acoustic sounds when they use sign language.

The objective of this project is to explain the design and development of a hand gesture-based sign language recognition system. The solution is based on a web camera as the major component, which is used to record a live video stream processed by a proprietary TensorFlow.js algorithm. The technology makes recognition of hand movements possible. Recognizing hand gestures is a straightforward way of providing a meaningful, highly flexible interaction between robots and their users; there is no physical contact between the user and the devices. A deep learning system that is efficient at picture recognition is used to locate the dynamically recorded hand movements, and convolutional neural networks are used to optimize performance. Static images of hand gestures are used to train the model, and the CNN is constructed without relying on a pre-trained model.
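As a hedged sketch of what such a from-scratch CNN might look like in TensorFlow.js (the layer sizes, the 224x224 input, and the five-class output are assumptions for illustration, not the project's published architecture):

```typescript
// Illustrative small CNN for classifying static hand-gesture images into 5 classes.
// No pre-trained weights are used; all architecture details are assumed.
import * as tf from '@tensorflow/tfjs';

export function buildGestureCnn(numClasses = 5): tf.LayersModel {
  const model = tf.sequential();
  model.add(tf.layers.conv2d({
    inputShape: [224, 224, 3], // RGB webcam frame resized to 224x224
    filters: 16, kernelSize: 3, activation: 'relu',
  }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.conv2d({ filters: 32, kernelSize: 3, activation: 'relu' }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.flatten());
  model.add(tf.layers.dropout({ rate: 0.25 })); // mild regularization
  model.add(tf.layers.dense({ units: 64, activation: 'relu' }));
  model.add(tf.layers.dense({ units: numClasses, activation: 'softmax' }));

  model.compile({
    optimizer: 'adam',
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy'],
  });
  return model;
}

// Training would then call model.fit(images, oneHotLabels, { epochs, batchSize }).
```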

Gesture recognition can be used to control devices or interfaces, such as a computer or a smartphone, through movements or actions such as hand or body movements, facial expressions, or even voice commands.

Why do many people want to use gestures instead of just touching or tapping a device? A desire for contactless sensing and hygiene concerns are the top drivers of demand for touchless technology. Gesture recognition can also provide better ergonomics for consumer devices. Another market driver is the rise of biometric systems in many areas of people's lives, from cars to homes to shops.

During the coronavirus pandemic, it's not surprising that people are reluctant to use touchscreens in public places. Moreover, for drivers, tapping a screen can be dangerous, as it distracts them from the road. In other cases, tapping small icons or accidentally clicking on the wrong field increases frustration and makes people look for a better customer experience. Real-time hand gesture recognition for computer interactions is just the next step in technological evolution, and it's ideally suited for today's consumer landscape. Besides using gestures when you cannot conveniently touch equipment, hand tracking can be applied in augmented and virtual reality environments, sign language recognition, gaming, and other use cases.


2. Classes

a) Class 1: "Xin chao ban": This class identifies greetings between communicators.

b) Class 2: "Rat vui duoc gap ban": When "Nice to meet you" is signed, the system recognizes it.

c) Class 3: "Xin loi": When an apology needs to be conveyed, this class translates the gesture into words.

d) Class 4: "Cam on": This class stands for "Thank you".

e) Class 5: "Không thể nhận diện": If none of the above classes is identified, this class is shown.
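A minimal sketch of how these five classes could be mapped to display text, with a confidence threshold falling back to Class 5 ("Không thể nhận diện"). The 0.8 threshold and the English glosses are illustrative assumptions, not values from the project.

```typescript
// Map raw predictions (className + probability) to an on-screen message.
// The threshold and the English glosses are assumptions for illustration.
interface Prediction {
  className: string;
  probability: number;
}

const GLOSSES: Record<string, string> = {
  'Xin chao ban': 'Hello',
  'Rat vui duoc gap ban': 'Nice to meet you',
  'Xin loi': 'Sorry',
  'Cam on': 'Thank you',
};

const UNRECOGNIZED = 'Không thể nhận diện'; // Class 5: cannot recognize

export function toMessage(predictions: Prediction[], threshold = 0.8): string {
  if (predictions.length === 0) {
    return UNRECOGNIZED;
  }
  const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
  // If no class is confident enough, report Class 5 instead of guessing.
  if (best.probability < threshold || !(best.className in GLOSSES)) {
    return UNRECOGNIZED;
  }
  return `${best.className} (${GLOSSES[best.className]})`;
}
```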

1. Enable multi-sensory impaired people to communicate with the computer without needing to rely on other people

People with deaf-blindness have both hearing and visual impairments. Some people with dual sensory loss have profound blindness and deafness, while others can use their hearing and vision to varying extents. This condition can make it challenging for affected individuals to communicate with others. Deaf-blind people can use a broad range of hearing and vision aids to communicate. These devices improve quality of life by allowing people in the sensory-impaired community to convey their needs and interact with computers.

The deaf-blind person places his or her hands in front of the camera of the device and then performs the movements and positions of the signs. Some signs and facial expressions may need to be modified. People can use one-handed or two-handed tactile sign language.

2. People who desire to study sign language can benefit from the sign language recognition system

Sign language can be particularly useful for those working in public-facing roles such as police officers, paramedics, nurses, educators and social workers. Learning sign language could also enhance the ability to recognise and interpret body language. Therefore, more and more people want to learn sign language, and a sign language recognition system can make that learning faster and more convenient.

3. Hearing-impaired individuals who use this sign language recognition technology can learn the language and eventually interact with people in everyday life

People with hearing loss can utilize this system as a communication tool that helps the deaf and the wider community interact daily. In other words, this system is a bridge for communication between deaf people and hearing people. Sign language is defined as a mode of interaction for hard-of-hearing people through a collection of hand gestures, postures, movements, and facial expressions which correspond to letters and words in real life. To communicate with deaf people, an interpreter is traditionally needed to translate real-world words and sentences.

4. Supporting the impaired in many aspects of life

AI has the potential to greatly benefit people with disabilities in a number of ways in the future:

Assistive Technology: AI can be used to develop assistive technologies that help people with disabilities perform tasks that would otherwise be difficult or impossible for them. For example, AI-powered devices like speech recognition software and smart home devices can help people with mobility or speech impairments to communicate and control their environment.

Improved Accessibility: AI can be used to improve the accessibility of products and services for people with disabilities. For example, AI can be used to develop audio descriptions for videos, making them more accessible to people who are visually impaired.


Enhanced Medical Care: AI can be used to improve medical diagnosis and treatment for people with disabilities. For example, AI-powered devices can be used to monitor the health of people with chronic conditions and alert their care providers in the event of any changes or emergencies.

Increased Employment Opportunities: AI can help people with disabilities to find employment and participate in the workforce. For example, AI-powered tools can help to match people with disabilities with employers who are looking for their skills and abilities.

5. A premise for the development of science and technology, especially the application of AI to supporting the lives of handicapped people

Technical helpers are present in the everyday life of a person with a disability, and Artificial Intelligence (AI) is already helping in many ways. AI-based technology can adapt interfaces to the needs of the person sitting or standing in front of a screen. An interface could then switch into a speech- or text-based mode, applying different contrast and sizes of elements on the screen. In that way, an AI-based system could learn how to adapt and better present the content of applications in a personalized manner. This would benefit not only persons with learning or cognitive problems, but also a growing part of our aging society. Automated customisation may help a blind person adapt the system to his or her needs, but the system will then know that there is a blind person in front of it. In general, AI could bring major improvements for the independent living of persons with disabilities in all parts of the world, not only in industrialized countries.

1. Strengths

a) Accessibility: One of the significant strengths of the project is its focus on accessibility. By providing a means of communication for individuals with hearing and speech impairments, the project empowers them to express themselves using sign language. This inclusive approach enables those individuals to participate more fully in conversations and interactions, bridging communication barriers.

b) User-Friendly: Teachable Machine's user-friendly interface simplifies the process of training machine learning models. It eliminates the need for extensive programming knowledge, making it accessible to a broader range of users. This ease of use enables individuals with little to no coding experience to create and deploy their own sign language recognition models quickly.

c) Real-Time Recognition: The potential for real-time sign language recognition is a significant strength of the project. With the right setup, the system can accurately identify sign language gestures in real time, allowing for immediate communication. This real-time aspect is crucial in facilitating smooth and interactive conversations, enhancing the user experience.

d) Customization: Teachable Machine's ability to train and customize the model specifically for the sign language gestures you want to recognize is a powerful feature. This customization ensures that the model is tailored to the unique characteristics and nuances of the target sign language, resulting in improved accuracy and performance. Users can focus on training the model for the specific gestures relevant to their sign language dialect or specific communication needs.

2. Weaknesses

a) Limitations in Recognizing Continuous Actions: A notable weakness lies in the project's ability to recognize continuous actions within sign language phrases. Sign language often involves sequences of interconnected or continuous gestures, such as phrases that convey meaning holistically. Training the model to differentiate between similar actions within different phrases can be challenging, potentially leading to confusion or inaccurate recognition (a simple per-frame smoothing sketch follows this list).

b) Limited Dataset: The accuracy and reliability of the sign language recognition model heavily rely on the quality and diversity of the training dataset. However, acquiring a diverse and comprehensive dataset of sign language samples can be challenging. The limited availability of such datasets may result in reduced accuracy and the model's inability to recognize certain gestures or variations.

c) Environmental Constraints: Sign language recognition can be affected by various environmental factors such as lighting conditions, background clutter, and camera quality. Suboptimal lighting conditions or noisy backgrounds may introduce errors or false detections, impacting the model's performance. Ensuring consistent and optimal environmental conditions for capturing sign language gestures can be a challenge, particularly in real-world scenarios.

d) Gesture Variability: Sign language can exhibit significant variations among individuals, regions, and even different cultures. Different individuals may perform the same gesture slightly differently, and variations in sign language dialects can further complicate recognition. Capturing the full range of these variations within a single model can be difficult, potentially leading to reduced accuracy for certain users or contexts.

e) Hardware Limitations: The effectiveness of the project may be influenced by the hardware limitations of the devices used for capturing and processing sign language gestures. Low-resolution cameras or limited processing power may impact the model's ability to accurately recognize complex or subtle sign language gestures. Ensuring access to adequate hardware resources is essential for optimal performance.
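One common mitigation for the per-frame jitter behind weakness (a), which is not part of the original project, is to smooth predictions over a short sliding window and only commit to a label when it dominates that window. A rough sketch, with the window size and required vote share chosen arbitrarily:

```typescript
// Majority-vote smoothing over the last N per-frame labels.
// Window size and required share are arbitrary illustration values.
export class LabelSmoother {
  private window: string[] = [];

  constructor(private size = 15, private requiredShare = 0.6) {}

  // Feed the top label of each frame; returns a stable label, or null if undecided.
  push(label: string): string | null {
    this.window.push(label);
    if (this.window.length > this.size) {
      this.window.shift(); // drop the oldest frame
    }
    const counts = new Map<string, number>();
    for (const l of this.window) {
      counts.set(l, (counts.get(l) ?? 0) + 1);
    }
    for (const [l, count] of counts) {
      if (count / this.window.length >= this.requiredShare) {
        return l; // this label dominates the window
      }
    }
    return null; // no stable gesture yet
  }
}
```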

IV. Future development

As already indicated, the method has numerous restrictions because it is built on machine learning. In order to make the system more comprehensive and useful, there are many directions for growth. The recognition system will eventually be coupled with

