A 3D Panoramic Image Guiding System Design and Implementation for Minimum Invasive Surgery


Feng Chia University
Ph.D. Program of Electrical and Communications Engineering
Doctoral Dissertation

一個使用於微創手術的三維影像導引系統之設計與實作
A 3D Panoramic Image Guiding System Design and Implementation for Minimum Invasive Surgery

Advisor: Dr. Ching-Hwa Cheng
Student: 金泰廷
February 2020

Acknowledgements

I could not have completed this thesis without the help of many people. First of all, I would like to thank my advisor, Prof. Dr. Ching-Hwa Cheng. He allowed me to join his lab, the Information and Technology Laboratory at Feng Chia University, and to work on this challenging research project, which let me study and discover many exciting things in medical image processing. I especially appreciate his dedicated guidance as well as his financial support and lab equipment. I am very grateful to Prof. Dr. Tang-Chieh Liu for taking the time to participate in our weekly meetings and for his helpful scientific advice. I would also like to thank him for reading my paper manuscripts and giving comments and guidance. Special thanks go to Dr. Kai Che Jack Liu and Dr. Wayne Shih Wei Huang from IRCAD Taiwan/AITS for enabling me to conduct in-vivo animal experiments. Their collaboration on the medical aspects of our research project was a great source of motivation. I am also thankful to the friends in my laboratory for the great time spent together, especially Thang, Toan, Tuan, and Tai, for their suggestions and motivation. Last but not least, I want to express my gratitude to my family, and especially my wife, for their unlimited support and belief in me. Their continuous spiritual support motivated me to complete this thesis.

Abstract

Minimally invasive surgery (MIS) is gradually replacing traditional surgical methods because of its advantages: it causes less injury, prevents unsightly surgical scars, and results in a faster recovery time. In MIS, the surgeon operates by observing images transmitted from an endoscope, which leads to three significant challenges: a limited field of view (FOV), a lack of depth perception, and viewing-angle control during surgery. This thesis explores how to solve these challenges using only the images provided by the endoscopic camera, without requiring any additional device in the operating room. We propose a 3D Panoramic Image Guiding System for MIS that provides surgeons with a broad, optimal, and stable field of view with Focus-Area 3D-Vision in the surgical area. We designed a new endoscope that consists of two endoscopic cameras attached to the tip of one tube. With two-view images captured simultaneously from the two lenses, the proposed algorithm combines the two cameras' FOVs into one larger FOV. The overlap area of the two cameras is also displayed in 3D space. In addition, our system can serve as a 3D measurement tool for endoscopic surgery. Finally, a surgical tool detection algorithm is proposed to evaluate surgical skills and to control the camera position during MIS. Experiments with the proposed system were performed on phantom-model images and in-vivo animal images. The experimental results confirm that our system is feasible and shows promise for overcoming existing limitations in laparoscopic surgery.

Keywords: minimally invasive surgery (MIS), image stitching, 3D reconstruction, surgical tool detection
Contents

Chapter 1 Introduction
  1.1 Minimally Invasive Surgery
  1.2 Computer Assisted Interventions
    1.2.1 Image Stitching
    1.2.2 3D Reconstruction
    1.2.3 Surgical Tool Detection
  1.3 Thesis Overview
    1.3.1 Proposed Endoscope
    1.3.2 Problem Description
    1.3.3 Thesis Structure
  1.4 Thesis Contributions
Chapter 2 Image Stitching in Minimally Invasive Surgery
  2.1 Introduction
  2.2 Feature-based Image Stitching
    2.2.1 Feature Detection
    2.2.2 Feature Matching
    2.2.3 Finding the Homography Matrix
    2.2.4 Image Warping
    2.2.5 Seam Estimation
    2.2.6 Image Blending
  2.3 Proposed Video Stitching
    2.3.1 Accelerating Image Registration
    2.3.2 Accelerating Image Composition
  2.4 Results
    2.4.1 Video-stitching Results
    2.4.2 Run Time Estimation
    2.4.3 Discussion
  2.5 Conclusions
Chapter 3 3D Reconstruction in Minimally Invasive Surgery
  3.1 Introduction
  3.2 Proposed Stereo Reconstruction
    3.2.1 Image Rectification
    3.2.2 Disparity Map
    3.2.3 Dense Reconstruction
  3.3 Results
    3.3.1 Experimental Description
    3.3.2 Evaluation of the Disparity Map and 3D Reconstruction
    3.3.3 Evaluation of Distance Measurement
    3.3.4 Run Time Evaluation
  3.4 Discussion
  3.5 Conclusion
Chapter 4 Video Stitching in Minimally Invasive Surgery
  4.1 Introduction
  4.2 The Proposed Image-Stitching Algorithm
    4.2.1 Image Registration
    4.2.2 Image Compositing
  4.3 The Proposed Video Stitching Algorithm
    4.3.1 Stitching Video at Increased Speed
    4.3.2 Increasing the Stability of the Stitched Video
  4.4 Experimental Results
    4.4.1 Video-stitching Results
    4.4.2 Comparison with the Previous Method
  4.5 Discussion
  4.6 Conclusions
Chapter 5 Surgical Tool Detection in Minimally Invasive Surgery
  5.1 Introduction
  5.2 Surgical Tool Detection
    5.2.1 Dataset
    5.2.2 Method
    5.2.3 Results
  5.3 Surgical Tool-Instance Segmentation
    5.3.1 Dataset
    5.3.2 Method
    5.3.3 Results
  5.4 Conclusions
Chapter 6 3D Panoramic Image Guiding System for Minimum Invasive Surgery
  6.1 Introduction
  6.2 Hardware
  6.3 Software
    6.3.1 Video Stitching
    6.3.2 3D Image
    6.3.3 Measurement
    6.3.4 Tool Detection
    6.3.5 Tool Tracking
    6.3.6 Robot Control
  6.4 Results
  6.5 Conclusion
Chapter 7 Conclusion
  7.1 Contributions
  7.2 Limitations
  7.3 Future Work
List of Figures

Figure 1.1: The open surgery procedure.
Figure 1.2: (a) The minimally invasive surgery procedure. (b) The setup for MIS.
Figure 1.3: Robotic-assisted MIS.
Figure 1.4: Examples of image stitching from a moving camera: (a) Behrens et al. [2], (b) Liu et al. [6], and (c) Ali et al. [4].
Figure 1.5: Example of 3D reconstruction based on moving a monocular endoscope [12].
Figure 1.6: Example of surgical tool detection [23].
Figure 1.7: The proposed endoscope system, consisting of two cameras, a mechanical tube, and a push-button. The figure depicts (a) the endoscopic cameras, (b) the geometric arrangement of the two cameras, (c) the primary state of the device, and (d) the working state of the device.
Figure 1.8: The schematic diagram of our endoscope system. The two images on the left indicate the input images obtained from the two lenses. Through the USB ports on the PC in the center, three outputs are derived by our algorithm. The three images on the right indicate a window displaying a 3D image, a window showing an extended 2D view, and another window showing the instrument detection result.
Figure 2.1: The combination of the two limited camera FOVs into a wider FOV.
Figure 2.2: Flowchart of the image-stitching process.
Figure 2.3: Calculating the sum of pixel intensities inside any rectangular region requires only three additions and four memory accesses when using the integral image: Σ = A - B - C + D.
Figure 2.4: Left to right and top to bottom: the Gaussian second-order derivatives Lxx, Lyy, Lxy (top row) and their approximations Dxx, Dyy, and Dxy (bottom row).
Figure 2.5: Conventional video-stitching algorithm.
Figure 2.6: Proposed video-stitching algorithm.
Figure 2.7: Overlap region during image stitching at frame t and frame t+1 (red); ROI region during image stitching at frame t (left, yellow); and small region during image stitching at frame t+1 (right, yellow).
Figure 2.8: ROI of Frame-1. The four corners of Frame-2 are transformed into four points P1, P2, P3, and P4. The red rectangle surrounds Frame-2*'s edges and is parallel to Frame-1. The ROI of Frame-1 is the intersection of Frame-1 with the red rectangle (green rectangle).
Figure 2.9: The image-stitching result (phantom model). The result expands the original FOV of the input image by 60%.
Figure 2.10: The image-stitching result (animal experiment). The result expands the original FOV of the input image by 55%.
Figure 2.11: Comparison of image registration times for the conventional method (blue) and the proposed method (green) on the CPU computer (a) and the computer with an additional GPU (b).
Figure 2.12: Comparison of seam estimation times for the conventional method (blue) and the proposed method (green) on the CPU computer (a) and the computer with an additional GPU (b).
Figure 2.13: Comparison of the stitched image for the conventional method and the proposed method: (a) input images; (b) matching feature points and (c) stitched image by the conventional method; (d) matching feature points and (f) stitched image by our method; (e) ground truth.
Figure 2.14: Transformation of Frame-2 into Frame-2*: (a) quadrilateral and (b) non-quadrilateral.
Figure 2.15: Images captured by four cameras.
Figure 2.16: Result of image stitching of four input images (area expansion ratio is 300%).
Figure 3.1: 3D reconstruction of the cameras' overlap by our endoscope.
Figure 3.2: The stereo reconstruction algorithm. There are three steps: image rectification, disparity calculation, and 3D reconstruction.
Figure 3.3: The pinhole camera model used in this study.
Figure 3.4: Radial distortion of the lens: (a) no distortion, (b) positive distortion, and (c) negative distortion.
Figure 3.5: Image rectification: (a) two input images, (b) two output aligned images.
Figure 3.6: Disparity computation algorithm by StereoBM. The sum of absolute differences (SAD) and winner-takes-all (WTA) are used for the disparity computation.
Figure 3.7: The disparity map calculation algorithm consists of three steps: (1) compute the disparity map with StereoBM, (2) compute the WLS disparity map and confidence map with WLS, (3) compute the WLS-FBS disparity map with FBS.
Figure 3.8: A stereo camera model.
Figure 3.9: 3D reconstruction from the ROI and disparity map.
Figure 3.10: Phantom model datasets.
Figure 3.11: In-vivo animal datasets.
Figure 3.12: The qualitative evaluation results of the disparity map. Column 1: ROI image, the overlapped area of the two input images. Column 2: raw disparity map, computed by StereoBM. Column 3: WLS disparity map, filtered by WLS. Column 4: WLS-FBS disparity map, filtered by WLS+FBS.
Figure 3.13: The qualitative evaluation results of the 3D reconstruction for four datasets. Column 1: point cloud of the raw disparity map. Column 2: point cloud of the WLS disparity map. Column 3: point cloud of the WLS-FBS disparity map.
Figure 3.14: Comparison of the estimated distance with the actual distance. Each of the sides AC, AD, AE, AF, AG, AH, and AK shows the estimated distance (yellow) and the actual distance (green).
Figure 3.15: Comparison of the estimated depth with the actual depth in the phantom model experiment.
Figure 4.1: Proposed panoramic endoscope system.
Figure 4.2: An illustrative example of the SURF-based stitching algorithm for MIS showing: (a) only a few matching features, distributed unequally, and (b) erroneous stitching.
Figure 4.3: Two consecutive frames in the stitched video.
Figure 4.4: Proposed image-stitching algorithm.
Figure 4.5: The ROI-grid method: the corresponding point pairs are determined based on the disparity value. The ROI (dark yellow) is the region used to calculate disparity. The ROI is divided into a grid (24 cells along each side), and each grid point (P) is used to determine the corresponding point (Q) in the right rectified image.
Figure 4.6: Decreasing the computing time required to stitch video using the downsizing technique (pink area).
Figure 4.7: Video stitching results for various samples from in-vivo animal trials. Left and middle: two input images captured from the two endoscopic cameras. Right: the stitching results.
Figure 4.8: Comparison of the stitching result for the SURF-based method and the proposed method. Left: SURF-based stitching result. Middle: the proposed stitching result. Right: ground truth.
Figure 4.9: Frame rate for both methods: SURF-based stitching (blue) and our stitching (orange).
Figure 4.10: The proposed method increases the FOV of the input image by 35% and is used to reconstruct the dense surface image of the overlapping area.
Figure 4.11: Our endoscope located about 1.5 cm from the surgical area. The proposed endoscope system can expand the FOV of the input images by up to 188%.
Figure 5.1: Surgical instrument detection based on a CNN.
Figure 5.2: The seven surgical tools used in cholecystectomy surgery (top row) and their location annotations (bottom row).
Figure 5.3: YOLO-based surgical tool detection.
Figure 5.4: The YOLO architecture.
Figure 5.5: The resized image is divided into an N × N grid. Each grid cell predicts B boxes with an objectness score (p0) and class scores (p1, p2, ..., pC).
Figure 5.6: Examples of surgical tool detection using YOLO.
6.5 Conclusion

In this chapter, we described the hardware and software of the proposed system, and presented the experiments and their results. These results show that the proposed system can address existing problems in MIS. Our system expands the limited viewing angle of a conventional endoscope while also displaying the 3D surface of the operating area as well as the depth of, and distances between, organs. The moving trajectory of the tip of a surgical tool is also recorded to assess a surgeon's skills. The proposed system additionally controls the camera position to maintain the best view of the operating area.

However, the proposed system still has some limitations. First, the Jetson Nano kit has limited computing power, so the processing speed is quite slow (only a few frames per second) and is not suitable for real-time applications. We will replace this kit with a more powerful one in the near future. Second, the robot arm is driven by stepping motors, which introduce slight vibration; this blurs the observed image and affects the accuracy of the proposed algorithm. We will also replace this robotic arm with a better one for smoother and more stable movement. After these improvements, we will test the reliability of the system in in-vivo animal trials.
Chapter 7 Conclusion

In this dissertation, we have described a 3D panoramic image guiding system for MIS that requires no additional hardware. Using image processing algorithms alone, we can increase the limited endoscope viewing angle, provide 3D images, and identify and locate surgical tools during MIS.

7.1 Contributions

In Chapter 2, we proposed a feature-based image stitching algorithm. We introduced a down-sized ROI technique that can be combined with SURF to speed up registration, and we combined the down-sized ROI with the graph-cut algorithm to speed up image composition. The experimental results showed that the proposed algorithm can enlarge the image size by up to 160%. Compared with the conventional method, the proposed one improves performance by 10× on a CPU-only computer and by 23× on a PC with an additional GPU.
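To make the registration-and-composition pipeline concrete, the following is a minimal Python/OpenCV sketch of feature-based stitching for one frame pair. It is not the thesis implementation: it omits the down-sized ROI acceleration and the graph-cut seam with blending (using a naive overwrite instead), and it assumes an opencv-contrib build in which the patented SURF detector (cv2.xfeatures2d) is available.

import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    """Warp img_right into img_left's frame via a SURF + RANSAC homography."""
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    # Feature detection and description (SURF, as in Chapter 2).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_l, des_l = surf.detectAndCompute(gray_l, None)
    kp_r, des_r = surf.detectAndCompute(gray_r, None)

    # Feature matching with Lowe's ratio test to discard ambiguous pairs.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_r, des_l, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Homography estimation; RANSAC rejects the remaining mismatches.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Image warping and naive composition (the thesis uses seam + blending).
    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (2 * w, h))
    pano[:h, :w] = img_left
    return pano

The down-sized ROI technique accelerates exactly the expensive steps here: feature detection is restricted to a reduced-resolution region of interest around the expected overlap, rather than run on the full frames.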
In Chapter 3, we demonstrated 3D reconstruction with our endoscope system. The overlap area of the two cameras was displayed in 3D space with sufficiently good quality. Our system can also serve as a 3D measurement tool for endoscopic surgery, with sub-millimetre error.

A robust video stitching algorithm was proposed and demonstrated in Chapter 4. The trial results confirm that the proposed endoscope can increase a conventional endoscope's FOV by 155%. Our endoscope system operates stably at frame rates of up to 27 fps on a single-CPU computer with two endoscopic cameras at a resolution of 640 × 480. The proposed stitching method is 1.55× faster and produces results that are closer to the ground truth than the SURF-based method used in Chapter 2.

In Chapter 5, we demonstrated real-time surgical tool detection for MIS using a convolutional neural network (CNN). We also provided a new dataset for instance segmentation of surgical tools. Experimental results demonstrated the feasibility of our method.
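Chapter 5's detector is YOLO-based; one generic way to run a trained YOLOv3 network is OpenCV's DNN module. The sketch below is a standard YOLOv3 inference loop, not the thesis code: the configuration and weight file names are hypothetical placeholders for a network trained on the seven-tool cholecystectomy dataset, and the decoding follows Figure 5.5 (each cell predicts B boxes with objectness p0 and class scores p1..pC).

import cv2
import numpy as np

# Hypothetical file names: a YOLOv3 model trained on the surgical-tool dataset.
net = cv2.dnn.readNetFromDarknet("yolov3_tools.cfg", "yolov3_tools.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect_tools(frame, conf_thr=0.5, nms_thr=0.4):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)

    boxes, scores, class_ids = [], [], []
    for out in net.forward(out_names):
        for det in out:                      # det = [cx, cy, bw, bh, p0, p1..pC]
            cls = int(np.argmax(det[5:]))
            conf = float(det[4] * det[5 + cls])   # objectness x class score
            if conf > conf_thr:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                scores.append(conf)
                class_ids.append(cls)

    # Non-maximum suppression drops duplicate boxes for the same instrument.
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thr, nms_thr)
    return [(class_ids[i], scores[i], boxes[i]) for i in np.array(keep).flatten()]

The returned box centers are what a tracking stage (for example, the trajectory recording mentioned below) would consume frame by frame.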
In Chapter 6, we designed a 3D Panoramic Image Guiding System (3DPIGMIS) to assist the surgeon during MIS. Our system expands the limited FOV of a conventional endoscope while also displaying the 3D surface of the operating area as well as the depth of, and distances between, organs. The moving trajectory of the tip of a surgical tool is also recorded to assess a surgeon's skills. The proposed system additionally controls the camera position to maintain the best view of the operating area.

7.2 Limitations

This study has certain limitations. First, the sizes of both the 3D image and the stitched image depend on the overlap percentage of the two cameras. For example, as the two cameras move closer to the operating area, the overlap rate becomes smaller. In this case, our system increases the camera's FOV at a higher expansion rate, while the 3D image of the overlap area shows less information. Although this is a limitation, when the camera is close to the operating area, expanding the camera's FOV is more important. Furthermore, when the distance from the cameras to the surgical area is less than 1.5 cm, there may be no overlap between the two cameras, so the proposed algorithm cannot be applied. Second, the dataset for surgical tool detection and instance segmentation uses only cholecystectomy surgery videos, and the number of images is limited; manual annotation takes a great deal of time. Therefore, at present, we do not provide a sufficiently large dataset with many different types of surgical tools in MIS. In addition, in this study we took the center of the rectangle containing the tip of the surgical tool, which is not the exact point for identifying the pose of the tool. Finally, the Jetson Nano kit has limited computing power, so the processing speed is quite slow (only a few frames per second) and is not suitable for real-time applications, and the robot arm is driven by stepping motors whose vibration blurs the observed image and affects the accuracy of our algorithm.

7.3 Future Work

In the future, we will improve the disparity map calculation algorithm, which simultaneously affects the accuracy of both stitching and 3D reconstruction. This work has so far stopped at identifying and locating the bounding box or boundary of surgical instruments; in the next study, we aim to estimate the tool's pose, which provides essential information for control in robotic MIS. For hardware, we will replace the Jetson Nano kit with a more powerful one in the near future, and we will replace the robotic arm with a better one for smoother and more stable movement. After these improvements, we will test the reliability of this system in in-vivo animal trials.

Appendix

(This section answers some of the reviewers' questions.)

Q1. Real-surgical evaluation and validation.

A1: Thank you for the comment. In this study, we tested video stitching, 3D reconstruction, and surgical tool detection on phantom-model images and in-vivo animal images (Chapters 2, 3, 4, and 5). Camera position control was implemented only in phantom-model experiments. At present, we have not built a real mechanical tube for the proposed endoscope. In addition, the robot arm is driven by stepping motors, which introduce slight vibration; this blurs the observed image and affects the accuracy of our algorithm. We will replace this robotic arm with a better one for smoother and more stable movement. After completing this work, we will test the reliability of the proposed system in in-vivo animal trials and then in real surgery.

Q2. Depth calibration and validation at different slopes and angles.

A2: Thank you for the comment. For depth validation, we estimated the distance at different slope angles (relative to a table) using a ruler, as shown in Table 1 and Figure 1.

[Figure 1: Evaluation of distances at slope angles of (a) 0°, (b) 15°, (c) 25°, (d) 35°, (e) 45°, and (f) 60°.]

Table 1. Evaluated distances at different slope angles (actual distance: 20 mm).

    Angle           0°     15°    25°    35°    45°    60°
    Measured (mm)   19.9   19.8   19.9   20.1   20.3   20.0

These results show that the distance evaluated by the proposed method remains quite accurate (an error of only about 0.3 mm) as the slope angle changes. In this study, we employed Bouguet's algorithm in OpenCV for image rectification. We simultaneously captured 20 images of a 14 × 11 chessboard with both cameras, placed at distances of 3 cm to 15 cm and at different slope angles (relative to the camera plane). These angles must ensure that the chessboard corners are clearly visible to both cameras; consequently, the rectification only works well when the angle is not greater than 60 degrees. When the slope angle exceeds 60 degrees, the two cameras are not aligned properly after rectification, the disparity map is calculated inaccurately, and the estimated distance becomes unreliable and unstable.
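For readers who want to reproduce this step, a Bouguet-style stereo calibration and rectification can be sketched as follows with OpenCV. This is a generic sketch, not the thesis code: the inner-corner count (13 × 10, assuming the "14 × 11" board counts squares), the square size, and the image file names are all assumptions.

import cv2
import glob
import numpy as np

PATTERN = (13, 10)   # inner corners, assuming 14 x 11 counts squares
SQUARE_MM = 5.0      # printed square size in mm -- hypothetical value

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, pts_l, pts_r, size = [], [], [], None
for fl, fr in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, PATTERN)
    ok_r, corners_r = cv2.findChessboardCorners(gr, PATTERN)
    if ok_l and ok_r:   # keep only pairs where both lenses see every corner
        obj_pts.append(objp)
        pts_l.append(corners_l)
        pts_r.append(corners_r)
        size = gl.shape[::-1]

# Calibrate each lens, then the pair; R, T locate the right lens w.r.t. the left.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_l, pts_r, K1, d1, K2, d2, size, flags=cv2.CALIB_FIX_INTRINSIC)

# Bouguet's algorithm: row-aligning rectification transforms plus the 4x4
# reprojection matrix Q, used later for depth and distance measurement.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
mapx_l, mapy_l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
mapx_r, mapy_r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
# rect_l = cv2.remap(raw_l, mapx_l, mapy_l, cv2.INTER_LINEAR); same for the right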
Q3. Depth error validation with quantitative representations (error vs. depth).

A3: Thank you for the comment. Figure 2 compares the estimated depth with the actual depth. The two depth curves are almost identical over the range of 3–15 cm (see also Figure 3.15 in Chapter 3). This range is consistent with MIS, where the camera is positioned quite close to the operating area. Within this distance range, the estimated depth error is only about 0.5 mm. When the depth is greater than 15 cm, the rectification results are no longer accurate, so the estimated depth becomes inaccurate and unreliable.

[Figure 2: Comparison of the estimated depth with the actual depth. The estimated depth error is only about 0.5 mm.]
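The distance and depth values in A2 and A3 come from the system's measurement tool, which reprojects pixels of the overlap area into 3D and takes Euclidean distances. Below is a minimal sketch of such a query, assuming a StereoBM-style fixed-point disparity map and the reprojection matrix Q from cv2.stereoRectify (as in the calibration sketch above); the function name is ours.

import cv2
import numpy as np

def point_to_point_distance(disparity_16s, Q, p1, p2):
    """Distance between two pixels p = (u, v) of the overlap area, in the
    same units as the calibration target (here: mm)."""
    # StereoBM returns 16-bit fixed-point disparities scaled by 16.
    disp = disparity_16s.astype(np.float32) / 16.0
    # Per-pixel 3D coordinates; internally this applies Z = f * B / d.
    pts3d = cv2.reprojectImageTo3D(disp, Q)
    a = pts3d[p1[1], p1[0]]   # arrays are indexed [row, col] = [v, u]
    b = pts3d[p2[1], p2[0]]
    return float(np.linalg.norm(a - b))

The same kind of two-point query underlies the measured values in Table 1 and the estimated distances shown in Figure 3.14.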
Q4. Add more comments to the slides, not only figures and pictures.

A4: Thank you for the comment. Your suggestions will help me greatly improve future presentations.

Q5. The lower and upper bounds of the image stitching, and the overlap-area rate (%) of the proposed endoscope.

A5: Thank you for the comment. For the proposed endoscope, the distance between the two cameras can be adjusted to change the cameras' overlap ratio and the FOV expansion rate. In this study, we set this distance to 1.5 cm in the "working state". The overlap ratio and the expansion ratio at various depths are shown in Figure 3.

[Figure 3: The overlap ratio (blue) and the expansion ratio (orange) at various depths from 1.5 cm to 18 cm.]

Figure 3 shows that the expansion ratio and the overlap ratio always move in opposite directions. At a depth of 1.5 cm, the overlap ratio is 12% and the expansion ratio is 88%; hence, the proposed algorithm can extend the conventional camera's FOV to 100 + 88 = 188%, while the area displayed in 3D is only 12%. At depths of less than 1.5 cm, there may be no overlap between the two cameras, and in that case our method cannot be applied; therefore, the lower bound is a depth of 1.5 cm. When we moved the camera away from the surgical area to a depth of 18 cm, the overlap ratio was 83% and the expansion ratio was 17%, meaning the proposed algorithm extends the camera's FOV by only 17% while the 3D-rendered area is 83%. At depths greater than 18 cm, the two cameras are not aligned properly after the rectification transformation, which reduces the accuracy of the algorithm; therefore, a depth of 18 cm is considered the upper bound. Thus, when the depth (the distance from the endoscope to the operating area) is within 1.5 cm to 18 cm, our algorithm works well.

Q6. Please comment more on how the parameters in your software are selected.

A6: Thank you for the comment. Our software includes the following windows, which allow the parameters to be selected and adjusted.

Control panel (Figure 4):
- Acceleration: speeds up the algorithm by reducing the image resolution N times.
- 3D image: "1" to show the 3D reconstruction (Figure 5).
- Show Disparity: "1" to show the disparity map (Figure 7).
- Video Stitching: "1" to show the video stitching (Figure 8).
- Measurement: "1" to measure between any two points in the overlap area (Figure 6).
- Tool Detection: "1" for YOLO detection, "2" for CamShift detection (Figure 6).
- Yolo Accuracy: the YOLO reliability (confidence) parameter.
- Tool Tracking: "1" to show the tracking of the tool's tip.
- Tool Size: limits the size of the bounding box to [Tool_Size*50, Tool_Size*50] to eliminate false detections.
- Overlap width: adjusts the width of the overlap area.
- Refined Disparity: "1" to refine the disparity map.
- Show information: "1" to show point coordinates and distances (Figure 6).
- Waiting: "1" to pause the algorithm.
- CAM: "1" to swap the positions of the two cameras (left to right or right to left).

[Figure 4: Control panel. Figure 5: 3D reconstruction.]

Disparity map module (Figure 7):
- Max_disp: adjusts the width of the overlap area.
- 2×wsize+1: the search window size (blockSize) for the StereoBM method.
- Scale_color: adjusts the color display of the disparity map.
- Sigma: parameter of the DisparityWLSFilter() function used to refine the disparity map.
- Spatial, luma, chroma, lambda: parameters of the FastBilateralSolverFilter() function in OpenCV used to refine the disparity map.
- Refine Disparity: "1" to refine the disparity map.

Video Stitching module (Figure 8):
- Acceleration: speeds up stitching by reducing the image resolution N times.
- Seam_scale: speeds up seam estimation by reducing the image resolution N times.
- Exposure: selects the method used to compensate the exposure of the specified image.
- Seam: selects the seam estimation method.
- Blend: selects the image blending method.
- stable: the parameter for stabilizing the video stitching.
- Threshold: the threshold value for stabilizing the video stitching.
- Rotation: the image rotation angle.
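The three-step disparity pipeline behind these parameters (StereoBM, then DisparityWLSFilter(), then FastBilateralSolverFilter(); cf. Figure 3.7) can be sketched as below. The parameter values are illustrative defaults rather than the tuned control-panel values, and the rectified input file names are placeholders.

import cv2

# Rectified left/right views (placeholder file names).
gray_l = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)
gray_r = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

# Step 1: raw disparity by block matching (SAD cost + winner-takes-all).
wsize = 7                                     # window is 2*wsize+1, as in the panel
left_matcher = cv2.StereoBM_create(numDisparities=64, blockSize=2 * wsize + 1)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)
disp_l = left_matcher.compute(gray_l, gray_r)
disp_r = right_matcher.compute(gray_r, gray_l)

# Step 2: weighted-least-squares filtering; also yields a confidence map.
wls = cv2.ximgproc.createDisparityWLSFilter(left_matcher)
wls.setLambda(8000.0)
wls.setSigmaColor(1.5)                        # the "Sigma" control-panel parameter
disp_wls = wls.filter(disp_l, gray_l, disparity_map_right=disp_r)
confidence = wls.getConfidenceMap()

# Step 3: fast bilateral solver guided by the left image and weighted by the
# WLS confidence map (the spatial/luma/chroma/lambda panel parameters).
fbs = cv2.ximgproc.createFastBilateralSolverFilter(gray_l, 8.0, 8.0, 8.0, 128.0)
disp_fbs = fbs.filter(disp_wls, confidence)

The refined map disp_fbs is what both the 3D reconstruction and the measurement tool consume; setting "Refine Disparity" to "0" in the panel corresponds to stopping after Step 1.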
Q7. Comparison of the panoramic image produced by your method with that produced by a wide-angle camera with image processing.

A7: We agree with the reviewers' comments regarding the existence of endoscopes that have a wide-angle objective lens attached to the tip, allowing physicians to obtain a broad view of the surgical images. Upon evaluating [R1], we found that Yamauchi et al. pioneered a dual-view endoscope that captures a zoomed view and a wide-angle view simultaneously with the help of an image-shifting prism. In their initial model, they employed a Porro prism to split the beam, so that half of the light is used to acquire a panoramic view. Although they achieved their goal, their method had a number of limitations, including low light throughput and low resolution. Researching further how a panoramic view can be achieved, we found that the goal can also be reached with panomorph lenses [R2] or with mirror attachments [R3]. However, these approaches encounter a multitude of issues ranging from aberrations to blind zones. In the present study, we came up with another method that uses image processing to produce wide-view images with only two endoscopic cameras and without modifying the hardware: the FOVs of the two cameras are combined to form a single, larger FOV, as shown in Figure 9. Experimental results reveal that our method can produce undistorted, wide images with a higher resolution.

[Figure 9: Large FOV provided by the two endoscopic cameras.]

[R1] Y. Yamauchi, J. Yamashita, Y. Fukui, K. Yokoyama, T. Sekiya, E. Ito, M. Kanai, T. Fukuyo, D. Hashimoto, and H. Iseki, "A dual-view endoscope with image shift," in Proceedings of Computer Assisted Radiology and Surgery (Paris, France, 2002), pp. 183-187.
[R2] P. Roulet, P. Konen, M. Villegas, S. Thibault, and P. Garneau, "360° endoscopy using panomorph lens technology," vol. 7558, 2010.
[R3] S.-M. Tseng and J.-C. Yu, "Panoramic endoscope based on convex parabolic mirrors," Optical Engineering, vol. 57, p. 1, 2018.

Q8. Four-camera image stitching using 3D depth information, not only 2D registration, to prove that the depth information can be used to generate a 3D panoramic image.

A8: Thank you for the comment. The proposed endoscope, which includes two lenses, provides a 2D panoramic view with overlap-area 3D vision for MIS, because we only have depth for the overlap area and not for the rest of the scene. In the future, we will extend the proposed endoscope with multiple streamlined lenses (e.g., several cameras). In this way, we can produce numerous overlap areas with depth information and then stitch these overlap areas into a 3D panoramic image. The current work stops at improving the performance of the two-lens endoscope in terms of quality, stability, and computation time.

Q9. Improve the problem of the stitched image being unstable when the camera moves a little.

A9: Thank you for the comment. In this study, the proposed algorithm compares the mean re-projection error ME(H_previous) with a specified threshold value to reduce the variation in the homography matrix, which increases the stability of the stitched videos (see Section 4.4.2 in Chapter 4). For the defense demonstration, this threshold was set to a small value in pixels, so the video was not stable. After increasing this threshold, we were able to improve the stability of the stitched video when the camera moves a little, as shown in Figure 10 (https://youtu.be/PUAEbmWPQLA).

[Figure 10: Stitched video from the two input videos: (a) input videos, (b) stitched video (https://youtu.be/PUAEbmWPQLA).]
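A9 amounts to a simple selection rule: keep the previous frame's homography while its mean re-projection error on the new correspondences stays below a threshold, and update it only when the scene has really changed. A minimal sketch of that rule follows; the threshold value is a placeholder, since the tuned value is not reproduced here, and src_pts/dst_pts are float32 arrays of matched points.

import cv2
import numpy as np

THRESHOLD_PX = 3.0   # stability threshold in pixels -- placeholder value
H_prev = None

def mean_reprojection_error(H, src_pts, dst_pts):
    """Mean distance between dst_pts and src_pts projected through H."""
    proj = cv2.perspectiveTransform(src_pts.reshape(-1, 1, 2), H)
    return float(np.mean(np.linalg.norm(proj - dst_pts.reshape(-1, 1, 2), axis=2)))

def stable_homography(H_new, src_pts, dst_pts):
    """Reuse the previous homography while it still fits, so the stitched
    video does not jitter when the camera barely moves."""
    global H_prev
    if H_prev is not None and \
            mean_reprojection_error(H_prev, src_pts, dst_pts) < THRESHOLD_PX:
        return H_prev   # small error: keep the old mapping (stable output)
    H_prev = H_new      # large error: the scene really changed, so update
    return H_new

A larger threshold trades responsiveness for stability, which matches the adjustment described above for the defense demonstration.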
Q10. Survey work and patent search for the proposed two-camera instrument mechanism and the 3D distance measurement used in generating the 3D view.

A10: Thank you for the comment. To the best of my knowledge, there is no existing endoscope similar to the one proposed. We will soon design and build a real mechanical tube for the proposed endoscope as described in this thesis.

References

[1] M. Reeff, F. Gerhard, P. Cattin, and G. Székely, Mosaicing of Endoscopic Placenta Images, 2006.
[2] A. Behrens, M. Bommes, T. Stehle, S. Gross, S. Leonhardt, and T. Aach, "Real-time image composition of bladder mosaics in fluorescence endoscopy," Computer Science - Research and Development, vol. 26, no. 1, pp. 51-64.
[3] D.K. Iakovidis, E. Spyrou, and D. Diamantis, "Efficient homography-based video visualization for wireless capsule endoscopy," in 13th IEEE International Conference on BioInformatics and BioEngineering, pp. 1-4, 2013.
[4] S. Ali, K. Faraz, C. Daul, and W. Blondel, "Optical flow with structure information for epithelial image mosaicing," in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1981-1984, 2015.
[5] L. Yang, J. Wang, T. Ando, et al., "Towards scene adaptive image correspondence for placental vasculature mosaic in computer assisted fetoscopic procedures," Int. J. Med. Robot., vol. 12, no. 3, pp. 375-386.
[6] J. Liu, B. Wang, W. Hu, et al., "Global and Local Panoramic Views for Gastroscopy: An Assisted Method of Gastroscopic Lesion Surveillance," IEEE Transactions on Biomedical Engineering, vol. 62, no. 9, pp. 2296-2307.
[7] W. Hu, X. Zhang, B. Wang, et al., "Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery," PLOS ONE, vol. 11, no. 4, e0153202.
[8] D. Scharstein and R. Szeliski, "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, vol. 47, no. 1, pp. 7-42.
[9] R.A. Hamzah and H. Ibrahim, "Literature Survey on Stereo Vision Disparity Map Algorithms," Journal of Sensors, vol. 2016, Article ID 8742920, 23 pages.
[10] L. Maier-Hein, P. Mountney, A. Bartoli, et al., "Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery," Medical Image Analysis, vol. 17, no. 8, pp. 974-996.
[11] K.L. Lurie, R. Angst, D.V. Zlatev, J.C. Liao, and A.K. Ellerbee Bowden, "3D reconstruction of cystoscopy videos for comprehensive bladder records," Biomedical Optics Express, vol. 8, no. 4, pp. 2106-2123.
[12] N. Mahmoud, A. Hostettler, T. Collins, L. Soler, C. Doignon, and J. Montiel, "SLAM based quasi dense reconstruction for minimally invasive surgery scenes," arXiv preprint arXiv:1705.09107.
[13] L. Chen, W. Tang, N.W. John, T.R. Wan, and J.J. Zhang, "SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality," Computer Methods and Programs in Biomedicine, vol. 158, pp. 135-146.
[14] T. Collins and A. Bartoli, "Towards Live Monocular 3D Laparoscopy Using Shading and Specularity Information," pp. 11-21, Berlin, Heidelberg: Springer, 2012.
[15] D. Stoyanov, M.V. Scarzanella, P. Pratt, and G.-Z. Yang, "Real-Time Stereo Reconstruction in Robotically Assisted Minimally Invasive Surgery," pp. 275-282, Berlin, Heidelberg: Springer, 2010.
[16] D. Stoyanov, A. Darzi, and G.-Z. Yang, A practical approach towards accurate dense 3D depth recovery for robotic laparoscopic surgery, 2005.
[17] D. Stoyanov, A. Darzi, and G.Z. Yang, "Dense 3D Depth Recovery for Soft Tissue Deformation During Robotically Assisted Laparoscopic Surgery," in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2004, Saint-Malo, France, September 26-29, 2004, Proceedings, Part II, pp. 41-48, Berlin, Heidelberg: Springer, 2004.
[18] S. Bernhardt, J. Abi-Nahed, and R. Abugharbieh, "Robust Dense Endoscopic Stereo Reconstruction for Minimally Invasive Surgery," pp. 254-262, Berlin, Heidelberg: Springer, 2013.
[19] S. Rohl, S. Bodenstedt, S. Suwelack, et al., "Dense GPU-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration," Med. Phys., vol. 39, no. 3, pp. 1632-1645.
[20] B. Münzer, K. Schoeffmann, and L. Böszörmenyi, "Content-based processing and analysis of endoscopic images and videos: A survey," Multimedia Tools and Applications, vol. 77, no. 1, pp. 1323-1362.
[21] V. Penza, J. Ortiz, L.S. Mattos, A. Forgione, and E. De Momi, "Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery," Int. J. Comput. Assist. Radiol. Surg., vol. 11, no. 2, pp. 197-206.
[22] C. Wang, F.A. Cheikh, M. Kaaniche, and O.J. Elle, Liver surface reconstruction for image guided surgery, SPIE, 2018.
[23] D. Bouget, M. Allan, D. Stoyanov, and P. Jannin, "Vision-based and marker-less surgical tool detection and tracking: a review of the literature," Medical Image Analysis, vol. 35, pp. 633-654.
[24] K. Jo, B. Choi, S. Choi, Y. Moon, and J. Choi, "Automatic detection of hemorrhage and surgical instrument in laparoscopic surgery image," in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1260-1263, 2016.
[25] K. Cai, R. Yang, Q. Lin, and Z. Wang, "Tracking multiple surgical instruments in a near-infrared optical system," Computer Assisted Surgery, vol. 21, pp. 46-55.
[26] M. Kranzfelder, A. Schneider, A. Fiolka, et al., "Real-time instrument detection in minimally invasive surgery using radiofrequency identification technology," The Journal of Surgical Research, vol. 185.
[27] I. Laina, N. Rieke, C. Rupprecht, et al., Concurrent Segmentation and Localization for Tracking of Surgical Instruments, 2017.
[28] L. Cheolwhan, W. Yuan-Fang, D.R. Uecker, and W. Yulun, "Image analysis for automated tracking in robot-assisted endoscopic surgery," in Proceedings of the 12th International Conference on Pattern Recognition, pp. 88-92, vol. 1, 1994.
[29] A. Reiter and P.K. Allen, "An online learning approach to in-vivo tracking using synergistic features," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3441-3446, 2010.
[30] D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele, and P. Jannin, "Detecting Surgical Tools by Modelling Local Appearance and Global Shape," IEEE Transactions on Medical Imaging, vol. 34.
[31] A. Reiter, P.K. Allen, and T. Zhao, "Feature Classification for Tracking Articulated Surgical Tools," pp. 592-600, Berlin, Heidelberg: Springer, 2012.
[32] A. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. De Mathelin, and N. Padoy, "EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos," IEEE Transactions on Medical Imaging, vol. 36.
[33] M. Sahu, A. Mukhopadhyay, A. Szengel, and S. Zachow, "Tool and Phase recognition using contextual CNN features."
[34] A. Raju, S. Wang, and J. Huang, "M2CAI surgical tool detection challenge report," University of Texas at Arlington, Tech. Rep.
[35] A.P. Twinanda, D. Mutter, J. Marescaux, M. de Mathelin, and N. Padoy, "Single- and multi-task architectures for tool presence detection challenge at M2CAI 2016," arXiv preprint arXiv:1610.08851.
[36] M2CAI, "Tool Presence Detection Challenge Results."
[37] A. Jin, S. Yeung, J. Jopling, et al., Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks, 2018.
[38] D.T. Kim, V.T. Nguyen, C.-H. Cheng, D.-G. Liu, K.C.J. Liu, and K.C.J. Huang, "Speed Improvement in Image Stitching for Panoramic Dynamic Images during Minimally Invasive Surgery," Journal of Healthcare Engineering, vol. 2018, Article ID 3654210, 14 pages.
[39] D.T. Kim, C.H. Cheng, D.G. Liu, K.-C.J. Liu, S.W.W. Huang, and S.T. Tran, "Performance Improvement for Two-Lens Panoramic Endoscopic System during Minimally Invasive Surgery," Journal of Healthcare Engineering, vol. 2019.
[40] D.T. Kim, C.-H. Cheng, D.-G. Liu, K.C.J. Liu, and W.S.W. Huang, "Designing a New Endoscope for Panoramic-View with Focus-Area 3D-Vision in Minimally Invasive Surgery," Journal of Medical and Biological Engineering, pp. 1-16.
[41] D.T. Kim, C.-H. Cheng, and D.-G. Liu, "A Stable Video Stitching Technique for Minimally Invasive Surgery," in Proceedings of the 2019 9th International Conference on Biomedical Engineering and Technology, pp. 266-269, ACM, 2019.
[42] D.-T. Kim and C.-H. Cheng, "A panoramic stitching vision performance improvement technique for minimally invasive surgery," in 2016 5th International Symposium on Next-Generation Electronics (ISNE), pp. 1-2, IEEE, 2016.
[43] R. Szeliski, "Image Alignment and Stitching," in Handbook of Mathematical Models in Computer Vision, pp. 273-292, Boston, MA: Springer US, 2006.
[44] D.G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110.
[45] R. Richa, R. Linhares, E. Comunello, et al., "Fundus Image Mosaicking for Information Augmentation in Computer-Assisted Slit-Lamp Imaging," IEEE Transactions on Medical Imaging, vol. 33, no. 6, pp. 1304-1312.
[46] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359.
[47] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Proceedings of the 2011 International Conference on Computer Vision, pp. 2564-2571, IEEE Computer Society, 2011.
[48] S. De Zanet, T. Rudolph, R. Richa, C. Tappeiner, and R. Sznitman, "Retinal slit lamp video mosaicking," International Journal of Computer Assisted Radiology and Surgery, vol. 11, no. 6, pp. 1035-1041.
[49] K. Prokopetc and A. Bartoli, "Reducing Drift in Mosaicing Slit-Lamp Retinal Images," in 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 533-540, 2016.
[50] K. Prokopetc and A. Bartoli, "SLIM (slit lamp image mosaicing): handling reflection artifacts," International Journal of Computer Assisted Radiology and Surgery, vol. 12, no. 6, pp. 911-920.
[51] D. Ji, R. Yang, L. Zhang, B. Wang, and X. Chen, "The research of medical microscopic image mosaic based on the algorithm of SURF," in 2013 10th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), pp. 16-20, 2013.
[52] A. Behrens, T. Stehle, S. Gross, and T. Aach, "Local and global panoramic imaging for fluorescence bladder endoscopy," in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 6990-6993, 2009.
[53] A. Behrens, M. Bommes, S. Gross, and T. Aach, "Image quality assessment of endoscopic panorama images," in 2011 18th IEEE International Conference on Image Processing, pp. 3113-3116, 2011.
[54] M. Brown and D.G. Lowe, "Automatic Panoramic Image Stitching using Invariant Features," International Journal of Computer Vision, vol. 74, no. 1, pp. 59-73.
[55] M.A. Fischler and R.C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395.
[56] V. Kwatra, A. Schödl, et al., "Graphcut textures: image and video synthesis using graph cuts," ACM Trans. Graph., vol. 22, no. 3, pp. 277-286.
[57] R.Y. Sinha, S.R. Raje, and G.A. Rao, "Three-dimensional laparoscopy: Principles and practice," Journal of Minimal Access Surgery, vol. 13, no. 3, pp. 165-169.
[58] A. Kaehler and G. Bradski, Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library, O'Reilly Media, Inc., 2016.
[59] P. Sturm, "Pinhole Camera Model," in Computer Vision: A Reference Guide, pp. 610-613, Boston, MA: Springer US, 2014.
[60] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330-1334.
[61] K. Konolige, "Small Vision Systems: Hardware and Implementation," pp. 203-212, London: Springer London, 1998.
[62] J. Ortiz, H. Calderón, and J. Fontaine, "Disparity map computation on scalable computing," in Proceedings of the 2010 IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, pp. 301-306, 2010.
[63] D. Min, S. Choi, J. Lu, B. Ham, K. Sohn, and M.N. Do, "Fast Global Image Smoothing Based on Weighted Least Squares," IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5638-5653.
[64] J.T. Barron and B. Poole, "The fast bilateral solver," in European Conference on Computer Vision, pp. 617-632, Springer, 2016.
[65] Y. Yamauchi, J. Yamashita, Y. Fukui, et al., "A dual-view endoscope with image shift," in CARS 2002 Computer Assisted Radiology and Surgery, pp. 183-187, Springer, 2002.
[66] P. Roulet, P. Konen, M. Villegas, S. Thibault, and P.Y. Garneau, "360° endoscopy using panomorph lens technology," in Endoscopic Microscopy V, p. 75580T, International Society for Optics and Photonics, 2010.
[67] S.-M. Tseng, J.-C. Yu, Y.-T. Hsu, C.-W. Huang, Y.-F. Tsai, and W.-C. Cheng, "Panoramic endoscope based on convex parabolic mirrors," Optical Engineering, vol. 57, no. 3, p. 033102.
[68] C. Takada, T. Suzuki, A. Afifi, and T. Nakaguchi, "Hybrid Tracking and Matching Algorithm for Mosaicking Multiple Surgical Views," pp. 24-35, Cham: Springer International Publishing, 2017.
[69] C. Takada, A. Afifi, T. Suzuki, and T. Nakaguchi, "An enhanced hybrid tracking-mosaicking approach for surgical view expansion," in 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3692-3695, 2017.
[70] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003.
[71] D.T. Kim, "Comparison-0 for Panoramic Video," https://youtu.be/audXZXKdJco
[72] D.T. Kim, "Comparison-1 for Panoramic Video," https://youtu.be/XCEq_bceufs
[73] D.T. Kim, "Comparison-2 for Panoramic Video," https://youtu.be/4_FFcJGUCIo
[74] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767.
[75] J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517-6525, 2017.
[76] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779-788, 2016.
[77] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2980-2988, 2017.
[78] J. Redmon, "Darknet: Open Source Neural Networks in C, 2013-2016," https://pjreddie.com/darknet/
[79] A. Dutta, A. Gupta, and A. Zisserman, "VGG Image Annotator (VIA)," http://www.robots.ox.ac.uk/~vgg/software/via/
[80] M. Everingham, L. Van Gool, C.K. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes (VOC) Challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338.
[81] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, pp. 91-99, 2015.
[82] W. Abdulla, "Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow," https://github.com/matterport/Mask_RCNN
