Novel hybrid three sigma (3∑w.i.s.d.o.m) approach: deep multimodal fusion for smart city applications toward wisdom

Authors

  • Fadwa FATHI, RITM Research Lab, ESTC, Hassan II University of Casablanca, Morocco
  • Mohammed OUZZIF, RITM Research Lab, ESTC, Hassan II University of Casablanca, Morocco
  • Norddine ABGHOUR, LIMSAD Research Lab, Faculty of Sciences, Hassan II University of Casablanca, Morocco

Keywords:

Big data, Deep learning, Deep multimodal fusion, Intelligence, Wisdom component

Abstract

In the big data era, data exhibits large volume and high velocity, and above all variety, also called heterogeneity, as datasets are generated across diverse city domains. Modeling heterogeneous data sources has recently attracted significant interest, especially given the power of artificial intelligence. Yet AI and big data, like every tool, can be used for good or for ill, and in reality we do not need intelligence alone; we need wisdom to create a new, powerful, and complete picture. Our concept is inspired by the human being: every system is like a person with five senses, and intuition, the sixth sense, results from the fusion of all the others, paving the way to wisdom. In this paper, we show how diversity and heterogeneity are key to fusion for better behaviour and decision-making, a question studied in many domains such as healthcare, self-driving cars, and smart recruitment. We then propose our novel hybrid three sigma (3∑w.i.s.d.o.m) approach: deep multimodal fusion for smart city applications toward wisdom.
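To make the fusion idea concrete, the sketch below shows a generic deep intermediate-fusion network in Python (PyTorch): one small encoder per modality plays the role of a "sense", and the concatenated embeddings feed a joint head that yields the final decision, the "sixth sense". This is a minimal illustration of standard feature-level multimodal fusion, not the paper's 3∑w.i.s.d.o.m architecture; the class name, dimensions, and five-modality setup are illustrative assumptions.

import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    # Hypothetical illustration: one encoder per modality ("sense"),
    # embeddings concatenated and mapped to a decision ("sixth sense").
    def __init__(self, modality_dims, embed_dim=64, num_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, embed_dim), nn.ReLU())
             for d in modality_dims]
        )
        self.head = nn.Sequential(
            nn.Linear(embed_dim * len(modality_dims), embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, inputs):
        # inputs: one tensor per modality, each of shape (batch, dim_i)
        embeddings = [enc(x) for enc, x in zip(self.encoders, inputs)]
        fused = torch.cat(embeddings, dim=-1)  # feature-level fusion
        return self.head(fused)

# Five modalities standing in for the "five senses" of the abstract.
dims = [32, 32, 16, 16, 8]
model = MultimodalFusionNet(modality_dims=dims)
batch = [torch.randn(4, d) for d in dims]
logits = model(batch)  # shape (4, 2): fused decision scores

Concatenating embeddings before a joint head is intermediate (feature-level) fusion; a late (decision-level) variant would instead combine per-modality predictions, for example by averaging their scores.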

Published

2024-02-26

How to Cite

FATHI, F., OUZZIF, M., & ABGHOUR, N. (2024). Novel hybrid three sigma (3∑w.i.s.d.o.m) approach: deep multimodal fusion for smart city applications toward wisdom. COMPUSOFT: An International Journal of Advanced Computer Technology, 9(06), 3714–3724. Retrieved from https://ijact.in/index.php/j/article/view/574

Section

Review Article
