Data Augmentation Technique to Expand a Road Dataset Using Mask R-CNN and Image Inpainting
Data-driven approaches are a popular way to train machine learning models, but they depend on large datasets. This research contributes a method for expanding datasets of urban road images without vehicles, which benefits existing and future projects that need empty-road imagery to train a data-driven model. The method combines image segmentation with image inpainting: vehicles are detected with Mask R-CNN, and each detected object is removed with image inpainting. To improve the method's effectiveness, the mask generated by Mask R-CNN is enlarged using the dilation operation of morphological transformation, so that the inpainted region fully covers the object. The results of the experiment support the method's efficacy.
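The pipeline the abstract describes — detect the vehicle, dilate its mask, inpaint the hole — can be sketched as follows. In practice the mask would come from a Mask R-CNN detector and the hole would be filled by a proper inpainting algorithm (e.g. OpenCV's `cv2.inpaint`); this self-contained NumPy sketch substitutes a hand-made mask and a naive diffusion fill, purely to illustrate the data flow, and all names in it are illustrative assumptions.

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element,
    standing in for cv2.dilate in the real pipeline."""
    pad = k // 2
    padded = np.pad(mask, pad)
    out = np.zeros_like(mask)
    H, W = mask.shape
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + H, dx:dx + W]
    return out

def inpaint(img, mask, iters=200):
    """Naive diffusion inpainting: repeatedly replace each masked
    pixel with the mean of its 4-neighbours until values diffuse
    in from the unmasked surroundings."""
    img = img.astype(float).copy()
    ys, xs = np.nonzero(mask)
    H, W = img.shape
    for _ in range(iters):
        up    = img[np.maximum(ys - 1, 0), xs]
        down  = img[np.minimum(ys + 1, H - 1), xs]
        left  = img[ys, np.maximum(xs - 1, 0)]
        right = img[ys, np.minimum(xs + 1, W - 1)]
        img[ys, xs] = (up + down + left + right) / 4.0
    return img

# Toy example: a uniform "road" with one bright "vehicle" patch.
road = np.full((32, 32), 100.0)
road[10:14, 10:14] = 255.0               # object to remove
mask = np.zeros((32, 32), dtype=bool)
mask[10:14, 10:14] = True                # detector output (e.g. Mask R-CNN)
mask = dilate(mask, k=5)                 # enlarge mask past the object edges
restored = inpaint(road, mask)           # fill hole from the surrounding road
```

Enlarging the mask before inpainting matters because segmentation boundaries rarely align exactly with the object: without dilation, a thin halo of vehicle pixels survives at the mask edge and gets diffused into the fill.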