
Recognition of difficult situations for assisting visually impaired people using a mobile Kinect


DOCUMENT INFORMATION

Basic information
Pages: 76
Size: 18.7 MB

Content

MINISTRY OF EDUCATION AND TRAINING
HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

Hoang Van Nam

DIFFICULT SITUATIONS RECOGNITION SYSTEM FOR VISUALLY-IMPAIRED AID USING A MOBILE KINECT

Department: Computer Science
Master Thesis of Science, Computer Science 2014B
Supervisor: Dr. Le Thi Lan

Ha Noi – 2016
SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

CONFIRMATION OF MASTER THESIS REVISION

Full name of the thesis author: …
Thesis topic: …
Major: …
Student ID: …

The author, the scientific supervisor and the Thesis Examination Board confirm that the author has corrected and supplemented the thesis according to the minutes of the Board meeting on … with the following contents:
…

Date: …
Supervisor                    Thesis author

CHAIR OF THE EXAMINATION BOARD

Declaration of Authorship

I, Hoang Van Nam, declare that this thesis, titled 'Difficult situations recognition for visual-impaired aid using mobile Kinect', and the work presented in it are my own. I confirm that:

• This work was done wholly or mainly while in candidature for a research degree at this University.
• Where any part of this thesis has previously been submitted for a degree or any other qualification at this University or any other institution, this has been clearly stated.
• Where I have consulted the published work of others, this is always clearly attributed.
• Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.
• I have acknowledged all main sources of help.
• Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself.

Signed:
Date:
HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY
International Research Institute MICA, Computer Vision Department
Master of Science

Abstract

Difficult situations recognition for visual-impaired aid using mobile Kinect
by Hoang Van Nam

By 2014, according to figures from some organizations, there are more than one million people in Vietnam living with sight loss, about 1.3% of the Vietnamese population. Despite the big impact on daily living, especially on the ability to move, read and communicate with others, only a small percentage of blind or visually impaired people live with an assistive device or animal such as a guide dog. Motivated by the significant changes in technology that have taken place in the last decade, especially the introduction of various types of sensors and the development of the field of computer vision, I present in this thesis a difficult situations recognition system for visually impaired aid using a mobile Kinect. The system is based on data captured from the Kinect and uses computer vision techniques to detect obstacles. In the current prototype, I focus only on detecting obstacles in indoor environments such as public buildings, and two types of obstacle are exploited: general obstacles in the moving path, and staircases, which pose a great danger to visually impaired people. 3D imaging techniques are used to detect general obstacles, including plane segmentation and 3D point clustering, and a mixed strategy between the depth and color images is used to detect staircases based on detecting the stair edges and their structure. The system is very reliable, with a detection rate of about 82.9% and a processing time of 493 ms per frame.

Acknowledgements

I am so honored to be here for the second time, in one of the finest universities in Vietnam, to write these grateful words to the people who have been supporting and guiding me from the very first moment when I was a university student until now, when I am writing my master thesis.

I am grateful to my supervisor, Dr. Le Thi Lan, whose expertise, understanding, generous guidance and support made it possible for me to work on a topic that was of great interest to me. It was a pleasure to work with her.
Special thanks to Dr. Tran Thi Thanh Hai, Dr. Vu Hai and Dr. Nguyen Thi Thuy (VNUA) and all of the members of the Computer Vision Department, MICA Institute, for their sharp comments and guidance on my work, which helped me a lot in learning how to study and research in the right way, and also for the valuable advice and encouragement they gave me during my thesis. I would like to express my gratitude to Prof. Veelaert Peter, Dr. Luong Quang Hiep and Mr. Michiel Vlaminck at Ghent University, Belgium, for their support. It has been a great honor to cooperate and work with them. Finally, I would especially like to thank my family and friends for the continued love and support they have given me throughout my life, helping me get past all the frustrating, struggling and confusing moments. Thanks for everything that helped me get to this day.

Hanoi, 19/02/2016
Hoang Van Nam

Contents

Declaration of Authorship
Abstract
Acknowledgements
Contents
List of Figures
List of Tables
Abbreviations

1 Introduction
  1.1 Motivation
  1.2 Definition
    1.2.1 Assistive systems for visually impaired people
    1.2.2 Difficult situations
    1.2.3 Mobile Kinect
    1.2.4 Environment Context
  1.3 Difficult Situations Recognition System
  1.4 Thesis Contributions

2 Related Works
  2.1 Assistive systems for visually impaired people
  2.2 RGB-D based assistive systems for visually impaired people
  2.3 Stair Detection

3 Obstacle Detection
  3.1 Overview
  3.2 Data Acquisition
  3.3 Point Cloud Registration
  3.4 Plane Segmentation
  3.5 Ground & Wall Plane Detection
  3.6 Obstacle Detection
  3.7 Stair Detection
    3.7.1 Stair definition
    3.7.2 Color-based stair detection
    3.7.3 Depth-based stair detection
    3.7.4 Result fusion
  3.8 Obstacle information representation

4 Experiments
  4.1 Dataset
  4.2 Difficult situation recognition evaluation
    4.2.1 Obstacle detection evaluation
    4.2.2 Stair detection evaluation

5 Conclusions and Future Works
  5.1 Conclusions
  5.2 Future Works

Publications

Bibliography
List of Figures

1.1 A Comprehensive Assistive Technology (CAT) Model provided by [12]
1.2 A model for activities attribute and mobility provided by [12]
1.3 Distribution of frequencies of head-level accidents for blind people [18]
1.4 Distribution of frequencies of tripping resulting in a fall [18]
1.5 A typical example of a depth image: (A) raw depth image, (B) depth image visualized by a jet color map, where the colorbar shows the real distance for each color value, (C) reconstructed 3D scene
1.6 A stereo image pair taken from the OpenCV library and the calculated depth image: (A) left image, (B) right image, (C) depth image (disparity map)
1.7 Some existing stereo cameras. From left to right: Kodak stereo camera, View-Master Personal stereo camera, ZED, Duo 3D Sensor
1.8 Time-of-flight systems from [3]
1.9 Some ToF cameras. From left to right: DepthSense, Fotonic, Microsoft Kinect v2
1.10 Structured light cameras. From left to right: PrimeSense, Microsoft Kinect v1
1.11 Structured light systems from [3]
1.12 Figure from [16]: (A) raw IR image with pattern, (B) depth image
1.13 Figure from [16]: (A) errors for structured light cameras, (B) quantization errors at different distances of a door: 1 m, 3 m, 5 m
1.14 Prototype of the system using a mobile Kinect: (A) Kinect with battery and belt, (B) backpack with laptop, (C) mobile Kinect mounted on the human body
1.15 Two different environments that I tested with: (A) our office building, (B) Nguyen Dinh Chieu secondary school
1.16 Prototype of our obstacle detection and warning system
2.1 Robot-Assisted Navigation from [17]: (A) RFID tag, (B) robot, (C) navigation
2.2 NXT Robot System from [6]: (A) the system's block diagram, (B) NXT robot
2.3 Mobile robot from [22] [21]
2.4 BrainPort vision substitution device [32]
2.5 Obstacle detection process from [30]
2.6 Stair detection from [26]: (A) input image, (B)(C) frequency as the output of a Gabor filter, (D) stair detection result
2.7 A near-approach for stair detection in [13]: (A) input image with detected stair region, (B) texture energy, (C) input image with detected lines that are stair candidates, (D) optical flow maps; in this image there is a significant change in the lines at the edge of the stair
2.8 Example of segmentation and classification in [24]
2.9 Stair modeling (left) and features in each plane [24]
2.10 Stair detection algorithm proposed in [29]: (A) detected lines in the edge image (using color information), (B) depth profiles along each line (red line: pedestrian crosswalk, blue: downstair, green: upstair)
3.1 Obstacle Detection Flowchart
3.2 Kinect mounted on the body
3.3 Coordinate Transformation Process
3.4 Kinect Coordinate
3.5 Point cloud rotation using the normal vector of the ground plane (white arrow): left: before rotating, right: after rotating
3.6 Normal vector estimation algorithms [15]: (a) the normal vector of the center point can be calculated by a cross product of two vectors of four neighbor points (red), (b) normal vector estimation in a scene
3.7 Plane segmentation result using the algorithm proposed in [15]; each plane is represented by a distinctive color
3.8 Detected ground and wall planes (ground: blue, wall: red)
3.9 Human segmentation data by the Microsoft Kinect SDK: (a) color image, (b) human mask
3.10 Detected obstacles: (a) color image, (b) detected obstacles
3.11 Model of a stair
3.12 Coordinate transformation models from [7]
3.13 Projective chirping: (a) a real-world object that generates a projection with "chirping" ("periodicity-in-perspective"), (b) center raster of the image, (c) best-fit projective chirp
3.14 A pin-hole camera model with a stair
3.15 A vertical Gabor filter kernel
3.16 Gabor filter applied on a color image: (a) original, (b) filtered image
3.17 Thresholding the grayscale image: (a) original, (b) thresholded image
3.18 Example of thinning an image using morphological operations
3.19 Thresholding the grayscale image: (a) original, (b) thresholded image
3.20 Six points voting for a line make an intersection in Hough space; this intersection has higher intensity than the neighboring pixels
3.21 Hough space: (a) line in the original space, (b) three curves voting for this line in Hough space
3.22 Hough space on a stair image: (a) original image, (b) Hough space
3.23 Chirp pattern detection: (a) Hough space, (b) original image with detected chirp pattern
3.24 Point cloud of a stair: (a) original color image, (b) point cloud data created from the color and depth images
3.25 Detected steps
3.26 Detected planes
3.27 Detected stair on point cloud

Chapter 4: Experiments

4.1 Dataset

In this section, I present in detail how a database was collected in a real environment in order to develop and evaluate the system, as well as the characteristics of the dataset. The mobile Kinect and backpack were mounted on the visually impaired person's body, as presented earlier in this thesis. To record a dataset, I wrote another program that gets data from the Kinect and saves it to the computer as a video file. By default, the Kinect provides several types of data, but in my work I only collected the depth image, the color image and the accelerometer data. These data are organized as follows:

• Depth data: Because the Kinect returns depth data as a 16-bit single-channel image, we cannot write this data as a video stream in the normal way. To deal with this problem, I encoded the depth image as an 8-bit three-channel image (an RGB image) where the blue channel holds the MSB (the most significant, high-order byte) and the green channel holds the LSB (the least significant, low-order byte), and saved it as uncompressed video. This is very important: because the proposed method is a type of image encoding, if we save the video with an encoder such as H.264 or MJPG, we lose the encoding and the decoded depth image will have wrong values. Fig. 4.1 illustrates a depth image after encoding, and a minimal encoding sketch is given after this list.

• Color data: This kind of data can be written as a normal video.

• Accelerometer data: This data can be written to a text file where each line represents the ground's normal vector (3 dimensions) in the Kinect coordinate system (in meters) for each frame of the video.
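The byte-packing scheme described in the depth-data item above can be written in a few lines. The following is a minimal sketch, assuming NumPy arrays and the blue/green channel layout given in the text; the helper names are illustrative, not taken from the thesis code.

```python
import numpy as np

def encode_depth(depth16):
    """Pack a 16-bit depth map into an 8-bit, 3-channel (B, G, R) image.

    Illustrative helper: high-order byte -> blue channel,
    low-order byte -> green channel, red channel unused.
    """
    high = (depth16 >> 8).astype(np.uint8)    # most significant byte
    low = (depth16 & 0xFF).astype(np.uint8)   # least significant byte
    red = np.zeros_like(low)
    return np.dstack([high, low, red])

def decode_depth(bgr):
    """Recover the original 16-bit depth map from an encoded frame."""
    high = bgr[:, :, 0].astype(np.uint16)
    low = bgr[:, :, 1].astype(np.uint16)
    return (high << 8) | low

# Round-trip check on random data; this only holds if the video is stored
# losslessly, which is why the stream is saved uncompressed.
depth = np.random.randint(0, 2**16, size=(480, 640), dtype=np.uint16)
assert np.array_equal(decode_depth(encode_depth(depth)), depth)
```

Any lossy codec such as H.264 or MJPG would alter individual byte values and break this round trip, which matches the warning above.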
docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep Chapter Experiments 4.1 Dataset In this section, I will present in detail about how to collect a database in the real environment in order to develop/evaluate the system and the characterize of the dataset The mobile Kinect and backpack have been mounted on the visually impaired body, which is presented in the Chapter To record a dataset, I wrote another program for getting data from Kinect and save it to computer as a video file By default, Kinect provides several types of data, but in my work, I only collected depth image, color image, and accelerometer data Those data will be organized as follow: • Depth data: Because Kinect returns depth data as 16-bit single channel image, so we cannot write this data as a video stream in the normal way To deal with this problem, I encoded depth image as a 8-bit three channels image (or RGB image) where blue channel represents the MSB (the most significant bit/high-order bit) and g channel represents LSB bit (the least significant bit/low-order bit) and saves it as uncompressed video This is very important thing because the proposed method is one type of image encoding if we save the video with some encoder like H.264, MJPG, we’ll loose our encoding and the decoded depth image will have wrong values Fig 4.1 illustrates depth image after encoding • Color data: This kind of data can be written as a normal video • Accelerometer data: This data can be written to a text file where each line represents information of ground’s normal vector (3 dimensions) in Kinect coordinate (with meter unit) of each frames in the video 49 luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep Chapter Experiments 50 (a) (b) (c) Figure 4.1: Depth image encoding (A) Original, (B) Visualized Image (C) Encoded Image Table 4.1: Database specifications Average video length minutes Lighting condition low lighting minutes high (sunny day) 
Place Number of video MICA building Nguyen Dinh Chieu school Total size 14 GB 34 GB In my work, I have collected dataset in two places: our office building and Nguyen Dinh Chieu secondary school for blind pupils In each environment, a blind person (in Nguyen Dinh Chieu school) or normal person (in MICA building) was asked to walking along the lobby of this building In MICA building, I prepared some static obstacles like trashbins, flower pot, distinguisher before recording database and few people walk backward and forward he/she In the Nguyen Dinh Chieu school, because it’s hard to prepare obstacle in the school, therefore the obstacles appear in this dataset are natural, they’re including static object like a brick column, balcony and moving obstacle is students in the school Table 4.1 shows the specifications of collected database However, due to the limitations in ground truth preparing, I only tested the system only with 243 images extracted from those databases luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep an to nghiep docx 123docz luan van hay luan van tot nghiep Chapter Experiments 4.2 4.2.1 51 Difficult situation recognition evaluation Obstacle detection evaluation To evaluate the obstacle recognition module, in each image, a mask of the object will be segmented to create the ground truth data To that, I used an interactive segmentation tools from [2] to manually segment the image Then, with each object detected in the point cloud, I convert it back to 2D image to create a mask of the object Each object will be assigned with a different color in the final mask Finally, the result will be evaluated in two different level: pixel level and object level using Jaccard index as shown as follows: J(H, O) = RH∩O RH∪O where: H: hypothesis (detected object’s region) O: object (object’s region in ground truth) RH∩O : Area of intersection region between hypothesis and object in image RH∪O : Area of union region between hypothesis and object in image Then, I defined true positives (TP), true negatives (TN), false positives (FP) as follows: T P : J(H, 0) > 0.5 F P : J(H, 0) < 0.5 or does not exist ground truth data in that region F N : does not exist a hypothesis that matched with ground truth region (T P : J(H, 0) > 0.5) and in the pixel level: T P : ∃PH where PH x = PO x and PH y = PO y F P :6 ∃PO where PH x = PO x and PH y = PO y F N :6 ∃PH where PH x = PO x and PH y = PO y where: PH is obstacle point in hypothesis (detected pixel) PO is obstacle point in object (ground truth pixel) Then, I will measure the detection rate by using precision and recall measurement: TP T P +F P TP T P +F N P recision = Recall = luan van hay luan van tot 
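To make the object-level matching concrete, here is a small sketch of the Jaccard-based evaluation described above, written with NumPy; the function names and the greedy one-to-one matching are illustrative assumptions rather than the exact evaluation code used in the thesis.

```python
import numpy as np

def jaccard(mask_h, mask_o):
    """J(H, O) = |H intersect O| / |H union O| for two boolean masks."""
    inter = np.logical_and(mask_h, mask_o).sum()
    union = np.logical_or(mask_h, mask_o).sum()
    return inter / union if union > 0 else 0.0

def object_level_counts(hypotheses, objects, thresh=0.5):
    """Greedy matching of detected masks to ground-truth masks.

    A hypothesis is a TP if it overlaps some unmatched ground-truth
    object with J above the threshold, otherwise it is a FP;
    unmatched ground-truth objects are FNs.
    """
    matched = set()
    tp = fp = 0
    for h in hypotheses:
        scores = [jaccard(h, o) if i not in matched else 0.0
                  for i, o in enumerate(objects)]
        best = int(np.argmax(scores)) if scores else -1
        if best >= 0 and scores[best] > thresh:
            matched.add(best)
            tp += 1
        else:
            fp += 1
    fn = len(objects) - len(matched)
    return tp, fp, fn

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

# Example with the object-level counts reported later in Table 4.3:
p, r = precision_recall(344, 71, 154)   # about 0.829 and 0.691
```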
For the pixel level, I used the Watershed algorithm on the depth image to segment each object from the background and make the ground truth (a rough sketch of this marker-based step is given at the end of this subsection). Table 4.2 shows the results at the pixel level:

Table 4.2: Pixel-level evaluation result (TP, FP, FN: millions of pixels)

TP   | FP   | FN   | Precision | Recall | F-Measure
5.02 | 1.31 | 2.11 | 79%       | 70%    | 74.2%

For the object level, I annotated each object manually with a rectangle. Table 4.3 shows the results at the object level:

Table 4.3: Object-level evaluation result (TP, FP, FN: objects)

TP  | FP | FN  | Precision | Recall | F-Measure
344 | 71 | 154 | 82.9%     | 69%    | 75.3%

The system operates at an average speed of about 2 Hz (493 ms/frame) with a 2x2 downsampling block (about 75,000 points in the point cloud), which is fast enough to be used in practice. Fig. 4.2 shows the average detection time of each step and of the whole process.

Figure 4.2: Detection time of each step in the proposed method: normal estimation, plane segmentation and obstacle detection (127 ms, 165 ms and 201 ms), and the whole process (493 ms).

As shown in Table 4.2 and Table 4.3, the precision achieved by the obstacle detection is 82.9% (71 of the 415 detected objects are false positives). The precision at the pixel level is slightly lower than at the object level because, in that evaluation, an obstacle must be well segmented from the image, whereas at the object level only the obstacle's bounding box is used for the evaluation. Moreover, almost all missed objects are far away or obscured by other objects (e.g. the extinguisher). Besides, one more limitation is that depth data from the Kinect can be lost on some materials or in strong-light environments. In these cases, the system does not have enough information to detect an obstacle, because building the point cloud depends heavily on the depth image. Although an obstacle is an ambiguous concept that depends a great deal on the user's needs, so that a given object may or may not be an obstacle and a false detection is annoying to the user, in general the proposed system can detect candidate obstacles as defined, and it meets the accuracy required for detecting and warning about obstacles in real time.
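As a rough illustration of the marker-based Watershed step mentioned earlier in this subsection, the sketch below separates a candidate object from the background of a depth frame with OpenCV; the file name and the seed thresholds are assumptions made for the example, not values from the thesis.

```python
import cv2
import numpy as np

# Load a 16-bit depth frame (hypothetical file name) and stretch it to 8 bits.
depth16 = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)
depth8 = cv2.normalize(depth16, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Seed markers: label 1 for confidently near pixels (candidate object),
# label 2 for confidently far pixels (background); 0 stays "unknown".
markers = np.zeros(depth8.shape, dtype=np.int32)
markers[depth8 < 60] = 1
markers[depth8 > 180] = 2

# cv2.watershed expects a 3-channel image; it grows the seeds and
# writes -1 on the watershed boundaries.
labels = cv2.watershed(cv2.cvtColor(depth8, cv2.COLOR_GRAY2BGR), markers)
object_mask = np.uint8(labels == 1) * 255
cv2.imwrite("object_mask.png", object_mask)
```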
4.2.2 Stair detection evaluation

For the stair detection evaluation, I evaluated on the dataset from [30] and on a dataset collected at my institute, which includes stairs and objects that look like stairs (uniform table arrays, bookshelves), from Monash University and Ghent University (UGent), Belgium (Fig. 4.3). Table 4.4 gives the specifications of the stair datasets and Table 4.5 shows the results of the stair detection algorithm. In these tables, Positive means there is a stair in the image and Negative means the image does not contain a stair. On the Monash and UGent datasets, many objects have concurrent parallel lines similar to stair edges, which can confuse the system, so the number of false positives is higher than on the MICA dataset, where no object except the floor plane has concurrent parallel lines. Regarding the positive rate, the MICA images are taken from video sequences and the lighting condition is not good, so the positive rate is lower than on the Monash and UGent datasets (40/50 in comparison with 27/27 and 80/90).

Figure 4.3: Example stair images for evaluation. (A) Positive sample from the MICA dataset, (B) negative sample from the MICA dataset, (C) positive sample from the MONASH dataset, (D) negative sample from the MONASH dataset.

Table 4.4: Stair dataset for evaluation

Dataset | Number of positive images | Number of negative images
Monash  | 90                        | 58
UGent   | 27                        | 0
MICA    | 50                        | 50

Table 4.5: Stair detection result of the proposed method on different datasets

Dataset | Positive | Negative | TP | FP | FN | Precision | Recall
Monash  | 80/90    | 51/58    | 80 | 7  | 10 | 91.95%    | 88.89%
UGent   | 27/27    | 0/0      | 27 | 0  | 0  | 100%      | 100%
MICA    | 40/50    | 50/50    | 40 | 0  | 10 | 100%      | 80%

To compare my proposed method with another method, I re-implemented the method proposed in [29], as shown in the algorithm listing below, and tested it on the MICA dataset. Table 4.6 shows the results.
Algorithm: Stair detection using RGB-D, following [29]
1: Convert the color image to grayscale
2: Edge detection (I tested with the Canny edge detection algorithm)
3: Line detection using the probabilistic Hough transform integrated in the OpenCV library
4: Merge nearby lines based on the distance between lines and the difference in angles
5: Find concurrent parallel lines using the line length, the line angle and the distance between two concurrent lines (more in Algorithm 1)
6: Project the concurrent parallel lines onto the depth image
7: Detect the stair on the depth image

Table 4.6: Comparison of the proposed method and the method of Tian et al. [29] on the MICA dataset

Method                    | Positive | Negative | TP | FP | FN | Precision | Recall
Based on Tian et al. [29] | 20/50    | 50/58    | 20 | 0  | 30 | 100%      | 40%
My proposed method        | 40/50    | 50/58    | 40 | 0  | 10 | 100%      | 80%

As can be seen in Fig. 4.4, Fig. 4.5 and Fig. 4.6, the Tian-based method uses a standard edge detection method, and the edge image produces multiple lines around each real edge (see Fig. 4.4 C). Therefore, when the Hough transform is applied, the result can be a lot of line segments for a single stair edge (see Fig. 4.4 D), while in my proposed method (see Fig. 4.4 H) the edge image has only a thin line for each stair edge. And because in my proposed method the line equation and the stair model are calculated directly on the Hough map (see Fig. 4.4 I), the problems with line merging and duplicated lines are removed (Fig. 4.4 G).
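For reference, the color-image part of the re-implemented baseline (steps 1-3 of the listing above) can be sketched with OpenCV as follows; the input file name, the Canny thresholds and the Hough parameters are illustrative assumptions, not the values used in the thesis.

```python
import cv2
import numpy as np

# Steps 1-3 of the baseline: grayscale -> Canny edges -> probabilistic Hough lines.
image = cv2.imread("stair_frame.png")            # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                 # assumed hysteresis thresholds

# HoughLinesP returns segments as (x1, y1, x2, y2); near-horizontal, roughly
# parallel segments are the stair-edge candidates passed to the later steps.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                           minLineLength=80, maxLineGap=10)

candidates = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 20:          # keep near-horizontal lines only
            candidates.append((x1, y1, x2, y2))
```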
Figure 4.4: Detected stair in the Tian-based method (A-F) and detected stair in my proposed method (G-I). (A) Color image, (B) depth image, (C) edges, (D) line segments, (E) detected concurrent lines, (F) depth values on the detected lines, (G) detected stair, where blue lines are false stair edges and green lines are stair edges, (H) edge image, (I) detected peaks in the Hough map corresponding to the lines in Figure G.

Figure 4.5: Missed detection in the Tian-based method because of missing depth on the stair (A-F), and the detected stair in my proposed method (G-I).

Figure 4.6: Missed detection in the Tian-based method because of missing depth on the stair (A-F), and the detected stair in my proposed method (G-I).

Chapter 5: Conclusions and Future Works

5.1 Conclusions

This thesis has presented a difficult situation recognition system in the context of moving along the lobby of a public building. The system contains an obstacle detection module that can detect both normal objects in the moving path and staircases in front of the user. A prototype of the system has also been deployed. The proposed framework works on the data taken from the Kinect, including the depth and color images and the accelerometer data, and builds a point cloud using the PCL library in order to detect obstacles. For obstacle detection, my algorithm is based on point cloud data with a ground plane detection and clustering module. The benefit of using point cloud (or depth) data is that it provides additional distance information for each pixel, so the distance from the user to an object can be calculated easily, and the ground and wall planes can be extracted and the 3D data clustered based on distance. Another advantage is that this kind of data is very reliable, since it is measured by a physical depth sensor using IR light. The disadvantage of this module is that depth data is not good in outdoor environments, and the depth range is limited to a few meters, so the working space of this system is only inside a building, which is a small, closed space.
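The ground-plane-plus-clustering pipeline summarized in the previous paragraph can be illustrated with a short stand-in sketch. The thesis itself builds on the PCL library; the sketch below uses the Open3D Python API instead, and the file name and all thresholds are assumptions made for the example.

```python
import open3d as o3d
import numpy as np

# Load a point cloud of the scene (hypothetical file).
pcd = o3d.io.read_point_cloud("scene.pcd")

# RANSAC plane fit: with the sensor roughly upright, the dominant plane
# in front of the user is usually the ground.
plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.03,
                                            ransac_n=3,
                                            num_iterations=1000)
ground = pcd.select_by_index(inlier_idx)
rest = pcd.select_by_index(inlier_idx, invert=True)

# Cluster the remaining points; each cluster is a candidate obstacle.
labels = np.array(rest.cluster_dbscan(eps=0.05, min_points=30))
for k in range(labels.max() + 1):
    cluster = rest.select_by_index(np.where(labels == k)[0])
    distance = np.linalg.norm(cluster.get_center())  # from the sensor origin
    print(f"obstacle {k}: {len(cluster.points)} points, {distance:.2f} m away")
```

Here segment_plane is a RANSAC plane fit and cluster_dbscan is density-based clustering; they play the same roles as the plane segmentation and 3D point clustering steps named in the abstract.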
With stair detection, the algorithm uses a mixture of depth and color data by exploiting the special structure of a stair: a line at the edge of each step. By using the color image, the limitations of the depth image (measurable range, incomplete depth data) are partially removed. By combining depth information, the algorithm also takes advantage of the depth data in order to determine whether the image contains a stair or not.
Finally, the overall system has been shown to be fast enough to run in real time and can be deployed on smaller, inexpensive devices such as embedded systems and micro-computers.

5.2 Future Works

In this section, I propose some improvements to my system, summarized as follows:

• Concerning the evaluation of obstacle detection, every obstacle must currently be segmented manually in each image. This requires a large amount of annotation and segmentation effort, since my data contains a large number of videos, so the evaluation in this thesis is still limited to a few datasets (Mica, Gent, Monash). In the near future, I will carry out a full evaluation of the obstacle detection algorithm.

• The next step I would like to take in the near future is to test the system with real visually impaired users on the full scenario and obtain a complete evaluation.

• In the long term, I will improve my system using the Kinect v2, which provides better depth data. I also plan to extend the algorithm to exploit temporal information (detecting objects over a sequence of frames rather than in each frame independently), to handle additional objects that are important to visually impaired people, such as doors and their open/closed state, to recognize text, and to classify each detected obstacle so that its name can be announced to the user.

Publications

• Conference(s)

[Published] Van-Nam Hoang, Thanh-Huong Nguyen, Thi-Lan Le, Thi-Thanh Hai Tran, Tan-Phu Vuong, and Nicolas Vuillerme. Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect. In 2015 2nd National Foundation for Science and Technology Development Conference on Information and Computer Science (NICS), pages 54–59. IEEE, September 2015.

[Accepted] Michiel Vlaminck, Hiep Quang Luong, Hoang Van Nam, Hai Vu, Peter Veelaert, and Wilfried Philips. Indoor assistance for visually impaired people using a RGB-D camera. In The Southwest Symposium on Image Analysis and Interpretation (SSIAI) 2016, New Mexico, USA.

• Journal(s)

[Extended version, accepted with major revision] Van-Nam Hoang, Thanh-Huong Nguyen, Thi-Lan Le, Thi-Thanh Hai Tran, Tan-Phu Vuong, and Nicolas Vuillerme. Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect. Vietnam Journal of Computer Science (VJCS).

Bibliography

[1] BBC - Visually impaired see the future - http://news.bbc.co.uk/2/hi/technology/4412283.stm
[2] Interactive Segmentation Tool - http://kspace.cdvp.dcu.ie/public/interactivesegmentation/
[3] Photonic Frontiers: Gesture Recognition: Lasers bring gesture recognition to the home - Laser Focus World - http://www.laserfocusworld.com/articles/2011/01/lasersbring-gesture-recognition-to-the-home.html
[4] Sound Foresight Technology - http://www.ultracane.com/soundforesigntechnologyltd
[5] The VOICE - https://www.seeingwithsound.com/
[6] Alghasra, D. M. and Saeed, H. Y. (2013). Guiding Visually Impaired People with NXT Robot through an Android Mobile Application. International Journal of Computing and Digital Systems, 2(3):129–134.
[7] Barfield, W. and Caudell, T. (2001). Fundamentals of Wearable Computers and Augmented Reality. Taylor & Francis.
[8] Bernabei, D., Ganovelli, F., Benedetto, M., Dellepiane, M., and Scopigno, R. (2011). A low-cost time-critical obstacle avoidance system for the visually impaired. In International Conference on Indoor Positioning and Indoor Navigation.
[9] Burrus, N. Nicolas Burrus Homepage - http://nicolas.burrus.name
[10] Ceipidor, U. B., D'Atri, E., Medaglia, C. M., Mei, M., Serbanati, A., Azzalin, G., Rizzo, F., Sironi, M., Contenti, M., and D'Atri, A. (2007). A RFID system to help visually impaired people in mobility. In EU RFID Forum, Brussels, Belgium.
[11] Craven, J. (2003). Access to electronic resources by visually impaired people. University of Sheffield, Department of Information Studies.
[12] Hersh, M. A. and Johnson, M. A., editors (2008). Assistive Technology for Visually Impaired and Blind People. Springer London, London.
[13] Hesch, J. A., Mariottini, G. L., and Roumeliotis, S. I. (2010). Descending-stair detection, approach, and traversal with an autonomous tracked vehicle. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5525–5531. IEEE.
[14] Hoang, V.-N., Nguyen, T.-H., Le, T.-L., Tran, T.-T. H., Vuong, T.-P., and Vuillerme, N. (2015). Obstacle detection and warning for visually impaired people based on electrode matrix and mobile Kinect. In 2015 2nd National Foundation for Science and Technology Development Conference on Information and Computer Science (NICS), pages 54–59. IEEE.
[15] Holz, D., Holzer, S., Rusu, R. B., and Behnke, S. (2011). Real-time plane segmentation using RGB-D cameras. In RoboCup 2011: Robot Soccer World Cup XV, pages 306–317. Springer.
[16] Khoshelham, K. and Elberink, S. O. (2012). Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors, 12(2):1437–1454.
[17] Kulyukin, V., Gharpure, C., Nicholson, J., and Pavithran, S. (2004). RFID in robot-assisted indoor navigation for the visually impaired. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), 2:1979–1984.
[18] Manduchi, R. and Kurniawan, S. (2011). Mobility-related accidents experienced by people with visual impairment. AER Journal: Research and Practice in Visual Impairment and Blindness, 4(2):44–54.
[19] Mayol-Cuevas, W., Tordoff, B., and Murray, D. (2009). On the Choice and Placement of Wearable Vision Sensors. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 39(2):414–425.
[20] Nguyen, B.-H. (2015). A haptic device for blind people - http://annualconf.shtp.hochiminhcity.gov.vn/
[21] Nguyen, Q. H., Vu, H., Tran, T. H., Hoang, V. N., and Nguyen, Q. H. (2015). Detection and estimate the distance to the obstacle: warning support for the visually impaired (in Vietnamese). In National Conference on Electronics, Communications and Information Technology - ECIT2015, pages 45–50. REV, IEEE.
[22] Nguyen, Q. H., Vu, H., Tran, T. H., Nguyen, D. V., Hoang, V. N., and Nguyen, Q. H. (2014). Navigational aids to visually impaired people in pervasive environments using robot (in Vietnamese). In National Conference on Electronics, Communications and Information Technology - ECIT2014, Nha Trang, Vietnam. IEEE, REV.
[23] Nguyen, T. H., Nguyen, T. H., Le, T. L., Tran, T. T. H., Vuillerme, N., and Vuong, T. P. (2013). A wireless assistive device for visually-impaired persons using tongue electrotactile system. In 2013 International Conference on Advanced Technologies for Communications (ATC 2013), pages 586–591. IEEE.
[24] Perez-Yus, A., Lopez-Nicolas, G., and Guerrero, J. J. (2014). Detection and Modelling of Staircases Using a Wearable Depth Sensor. In Second Workshop on Assistive Computer Vision and Robotics (ACVR), held with ECCV 2014.
[25] Rusu, R. B. and Cousins, S. (2011). 3D is here: Point Cloud Library (PCL). In 2011 IEEE International Conference on Robotics and Automation, pages 1–4. IEEE.
[26] Se, S. and Brady, M. (2000). Vision-based detection of staircases. In Fourth Asian Conference on Computer Vision (ACCV).
[27] Tang, H., Vincent, M., Ro, T., and Zhu, Z. (2013). From RGB-D to low-resolution tactile: Smart sampling and early testing. In 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pages 1–6. IEEE.
[28] Tang, T. J. J., Lui, W. L. D., and Li, W. H. (2012). Plane-based detection of staircases using inverse depth. In Australasian Conference on Robotics and Automation, pages 3–5.
[29] Tian, Y. (2014). RGB-D Sensor-Based Computer Vision Assistive Technology for Visually Impaired Persons. In Computer Vision and Machine Learning with RGB-D Sensors, pages 173–194.
[30] Vlaminck, M., Jovanov, L., Van Hese, P., Goossens, B., Philips, W., and Pizurica, A. (2013). Obstacle detection for pedestrians with a visual impairment based on 3D imaging. In 2013 International Conference on 3D Imaging (IC3D), pages 1–7. IEEE.
[31] Weisstein, E. W. Sweep Signal - http://mathworld.wolfram.com/SweepSignal.html
[32] Wicab, I. BrainPort V100 Vision Aid - http://www.new.wicab.com/
[33] Zöllner, M., Huber, S., Jetter, H. C., and Reiterer, H. (2011). NAVI - A proof-of-concept of a mobile navigational aid for visually impaired based on the Microsoft Kinect. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6949 LNCS(c):584–587.