
Centralized versus decentralized

In sensor fusion, centralized versus decentralized refers to where the fusion of the data occurs. In centralized fusion, the clients simply forward all of the data to a central location, and some entity at the central location is responsible for correlating and fusing the data. In decentralized fusion, the clients take full responsibility for fusing the data. "In this case, every sensor or platform can be viewed as an intelligent asset having some degree of autonomy in decision-making."[1]
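As a minimal illustration of the distinction, the Python sketch below fuses scalar readings both ways; the node names, values, and averaging rule are illustrative assumptions rather than a prescribed architecture.

```python
# Hedged sketch contrasting centralized and decentralized fusion of scalar
# readings (e.g., temperature). All names and values are illustrative.
from statistics import mean

readings = {"node_a": [20.1, 20.3], "node_b": [19.8, 20.0], "node_c": [20.5, 20.4]}

def centralized_fusion(all_readings):
    # Centralized: every node forwards raw data; one entity fuses everything.
    pooled = [r for node in all_readings.values() for r in node]
    return mean(pooled)

def decentralized_fusion(all_readings):
    # Decentralized: each node fuses locally and shares only its own estimate.
    return {node: mean(r) for node, r in all_readings.items()}

print(centralized_fusion(readings))    # one global estimate from pooled raw data
print(decentralized_fusion(readings))  # per-node estimates, fused at the nodes
```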

Multiple combinations of centralized and decentralized systems exist.

Another classification of sensor configuration refers to the coordination of information flow between sensors.[2] These mechanisms provide a way to resolve conflicts or disagreements and to allow the development of dynamic sensing strategies. Sensors are in a redundant (or competitive) configuration if each node delivers independent measures of the same properties. This configuration can be used for error correction when comparing information from multiple nodes. Redundant strategies are often used with high-level fusion in voting procedures.[3][4] A complementary configuration occurs when multiple information sources supply different information about the same features. This strategy is used for fusing information at the raw-data level within decision-making algorithms. Complementary features are typically applied in motion recognition tasks with neural networks,[5][6] hidden Markov models,[7][8] support-vector machines,[9] clustering methods and other techniques.[8][9] Cooperative sensor fusion uses the information extracted by multiple independent sensors to provide information that would not be available from any single sensor. For example, sensors connected to body segments can be used to detect the angle between them, a quantity no single node could measure. Cooperative information fusion can be used in motion recognition,[10] gait analysis and motion analysis.[11][12][13]
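The three configurations can be illustrated with a short, hedged Python sketch; the functions, readings, and the joint-angle example are assumptions chosen for illustration, not methods from the cited works.

```python
# Hedged sketch of redundant, complementary, and cooperative configurations.

def redundant_vote(decisions):
    # Redundant (competitive): independent measures of the same property,
    # fused here by simple majority voting over per-node class decisions.
    return max(set(decisions), key=decisions.count)

def complementary_concat(accel_features, gyro_features):
    # Complementary: different information about the same features,
    # fused by concatenating each source's feature vector.
    return accel_features + gyro_features

def cooperative_joint_angle(thigh_deg, shank_deg):
    # Cooperative: derive a quantity no single node observes, e.g. the
    # angle between two body segments from their orientation estimates.
    return thigh_deg - shank_deg

print(redundant_vote(["walk", "walk", "run"]))         # -> "walk"
print(complementary_concat([0.1, 0.9], [0.02, -0.3]))  # -> 4-element vector
print(cooperative_joint_angle(30.0, -10.0))            # -> 40.0 degrees
```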

Levels

There are several categories or levels of sensor fusion that are commonly used.[14][15][16][17][18][19]

  • Level 0 – Data alignment
  • Level 1 – Entity assessment (e.g. signal/feature/object).
    • Tracking and object detection/recognition/identification
  • Level 2 – Situation assessment
  • Level 3 – Impact assessment
  • Level 4 – Process refinement (i.e. sensor management)
  • Level 5 – User refinement

Sensor fusion levels can also be defined based on the kind of information used to feed the fusion algorithm [Gravina 2017]. More precisely, sensor fusion can be performed by fusing raw data coming from different sources, extracted features, or even decisions made by single nodes. A minimal sketch contrasting the three levels follows the list below.

  • Data level - data-level (or early) fusion aims to fuse raw data from multiple sources and represents fusion at the lowest level of abstraction. It is the most common sensor fusion technique in many fields of application. Data-level fusion algorithms usually aim to combine multiple homogeneous sources of sensory data to achieve more accurate and synthetic readings.[20] When portable devices are employed, data compression represents an important factor, since collecting raw information from multiple sources generates huge information spaces that can pose problems in terms of memory or communication bandwidth for portable systems. Data-level fusion also tends to generate large input spaces, which slow down the decision-making procedure. Finally, data-level fusion often cannot handle incomplete measurements: if one sensor modality becomes useless due to malfunction, breakdown or other reasons, the whole system may produce ambiguous outcomes.
  • Feature level - features represent information computed on board by each sensing node. These features are then sent to a fusion node to feed the fusion algorithm.[21] This procedure generates smaller information spaces with respect to data-level fusion, which reduces the computational load. It is important to properly select the features on which classification procedures are defined: choosing the most efficient feature set should be a main aspect of method design. Feature-selection algorithms that properly detect correlated features and feature subsets improve recognition accuracy, but large training sets are usually required to find the most significant feature subset.
  • Decision level - decision-level (or late) fusion is the procedure of selecting a hypothesis from a set of hypotheses generated by individual (usually weaker) decisions of multiple nodes.[22] It is the highest level of abstraction and uses information that has already been elaborated through preliminary data- or feature-level processing. The main goal of decision fusion is to use a meta-level classifier, while data from the nodes are preprocessed by extracting features from them.[23] Typically, decision-level sensor fusion is used in classification and recognition activities, and the two most common approaches are majority voting and naive Bayes.[24] Advantages of decision-level fusion include reduced communication bandwidth and improved decision accuracy. It also allows the combination of heterogeneous sensors.[21]
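The sketch below contrasts the three levels on a toy activity-recognition task; the window values, the chosen features, and the threshold classifier are illustrative assumptions, not methods from the cited works.

```python
# Hedged sketch contrasting data-, feature-, and decision-level fusion.
import numpy as np

# One window of raw samples per node (homogeneous accelerometer magnitudes).
node_windows = {
    "wrist": np.array([1.0, 1.2, 0.9, 1.1]),
    "ankle": np.array([1.1, 1.0, 1.0, 1.2]),
}

def classify(feature_vector):
    # Stand-in classifier: thresholds mean intensity (assumption).
    return "active" if np.mean(feature_vector) > 1.0 else "idle"

# Data level: fuse raw samples first, then decide on the fused signal.
fused_raw = np.mean(np.stack(list(node_windows.values())), axis=0)
data_level = classify(fused_raw)

# Feature level: each node computes compact features; fuse the features.
features = np.concatenate([[w.mean(), w.std()] for w in node_windows.values()])
feature_level = classify(features)

# Decision level: each node decides independently; fuse by majority voting.
votes = [classify(w) for w in node_windows.values()]
decision_level = max(set(votes), key=votes.count)

print(data_level, feature_level, decision_level)
```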



Prostheses

Passive prostheses show better aesthetic features than mechanically powered devices.[25] However, they do not actively assist the user, as they have no active components. For this reason, they normally result in abnormal biomechanics and require more metabolic energy to walk at the same velocity as non-amputees. Variable-damping prostheses adapt to different gait modes by modulating their damping level through control shared between the user and a smart algorithm. They provide better stability and adaptation to different ground surfaces.[26] Quasi-passive prostheses have recently become more popular, including commercially available devices.[27] Active prostheses can change their dynamics depending on the activity performed.[28][29][30] These solutions involve the use of various types of sensors and actuation units.[27] They usually exploit data from sensors such as EMG electrodes, accelerometers and gyroscopes.[28] Recent active prostheses embed classification algorithms that use machine learning techniques to predict the user's locomotion intention, in order to adapt the prosthesis actuation to the subject's biomechanics.[31][32]
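A hedged sketch of such locomotion-intention classification from windowed inertial features is given below; the synthetic data, the feature choice, and the random-forest classifier are illustrative assumptions, not the controllers described in the cited works.

```python
# Hedged sketch: classify locomotion mode from windowed IMU-style features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(signal):
    # Compact per-window features from one inertial channel (assumed choice).
    return [signal.mean(), signal.std(), signal.min(), signal.max()]

# Synthetic training windows for two locomotion modes (stand-ins for real
# labeled gait data from accelerometers, gyroscopes or EMG).
walk = [window_features(rng.normal(1.0, 0.5, 50)) for _ in range(100)]
stairs = [window_features(rng.normal(1.6, 0.9, 50)) for _ in range(100)]
X = np.array(walk + stairs)
y = np.array(["walk"] * 100 + ["stairs"] * 100)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At run time, the prosthesis would classify each incoming window and
# switch its actuation strategy accordingly.
new_window = window_features(rng.normal(1.5, 0.9, 50))
print(clf.predict([new_window]))
```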

  1. ^ N. Xiong; P. Svensson (2002). "Multi-sensor management for information fusion: issues and approaches". Information Fusion. 3 (2): 163–186.
  2. ^ Durrant-Whyte, Hugh F. (2016). "Sensor Models and Multisensor Integration". The International Journal of Robotics Research. 7 (6): 97–113. doi:10.1177/027836498800700608. ISSN 0278-3649.
  3. ^ Li, Wenfeng; Bao, Junrong; Fu, Xiuwen; Fortino, Giancarlo; Galzarano, Stefano (2012). "Human Postures Recognition Based on D-S Evidence Theory and Multi-sensor Data Fusion": 912–917. doi:10.1109/CCGrid.2012.144.
  4. ^ Fortino, Giancarlo; Gravina, Raffaele (2015). "Fall-MobileGuard: a Smart Real-Time Fall Detection System". doi:10.4108/eai.28-9-2015.2261462.
  5. ^ Tao, Shuai; Zhang, Xiaowei; Cai, Huaying; Lv, Zeping; Hu, Caiyou; Xie, Haiqun (2018). "Gait based biometric personal authentication by using MEMS inertial sensors". Journal of Ambient Intelligence and Humanized Computing. 9 (5): 1705–1712. doi:10.1007/s12652-018-0880-6. ISSN 1868-5137.
  6. ^ Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar (2017). "IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion". Sensors. 17 (12): 2735. doi:10.3390/s17122735. ISSN 1424-8220.
  7. ^ Guenterberg, E.; Yang, A.Y.; Ghasemzadeh, H.; Jafari, R.; Bajcsy, R.; Sastry, S.S. (2009). "A Method for Extracting Temporal Parameters Based on Hidden Markov Models in Body Sensor Networks With Inertial Sensors". IEEE Transactions on Information Technology in Biomedicine. 13 (6): 1019–1030. doi:10.1109/TITB.2009.2028421. ISSN 1089-7771.
  8. ^ a b Parisi, Federico; Ferrari, Gianluigi; Giuberti, Matteo; Contin, Laura; Cimolin, Veronica; Azzaro, Corrado; Albani, Giovanni; Mauro, Alessandro (2016). "Inertial BSN-Based Characterization and Automatic UPDRS Evaluation of the Gait Task of Parkinsonians". IEEE Transactions on Affective Computing. 7 (3): 258–271. doi:10.1109/TAFFC.2016.2549533. ISSN 1949-3045.
  9. ^ a b Gao, Lei; Bourke, A.K.; Nelson, John (2014). "Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems". Medical Engineering & Physics. 36 (6): 779–785. doi:10.1016/j.medengphy.2014.02.012. ISSN 1350-4533.
  10. ^ Xu, James Y.; Wang, Yan; Barrett, Mick; Dobkin, Bruce; Pottie, Greg J.; Kaiser, William J. (2016). "Personalized Multilayer Daily Life Profiling Through Context Enabled Activity Classification and Motion Reconstruction: An Integrated System Approach". IEEE Journal of Biomedical and Health Informatics. 20 (1): 177–188. doi:10.1109/JBHI.2014.2385694. ISSN 2168-2194.
  11. ^ Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona (2015). "A Novel Adaptive, Real-Time Algorithm to Detect Gait Events From Wearable Sensors". IEEE Transactions on Neural Systems and Rehabilitation Engineering. 23 (3): 413–422. doi:10.1109/TNSRE.2014.2337914. ISSN 1534-4320.
  12. ^ Wang, Zhelong; Qiu, Sen; Cao, Zhongkai; Jiang, Ming (2013). "Quantitative assessment of dual gait analysis based on inertial sensors with body sensor network". Sensor Review. 33 (1): 48–56. doi:10.1108/02602281311294342. ISSN 0260-2288.
  13. ^ Kong, Weisheng; Wanning, Lauren; Sessa, Salvatore; Zecca, Massimiliano; Magistro, Daniele; Takeuchi, Hikaru; Kawashima, Ryuta; Takanishi, Atsuo (2017). "Step Sequence and Direction Detection of Four Square Step Test". IEEE Robotics and Automation Letters. 2 (4): 2194–2200. doi:10.1109/LRA.2017.2723929. ISSN 2377-3766.
  14. ^ Rethinking JDL Data Fusion Levels
  15. ^ Blasch, E., Plano, S. (2003) “Level 5: User Refinement to aid the Fusion Process”, Proceedings of the SPIE, Vol. 5099.
  16. ^ J. Llinas; C. Bowman; G. Rogova; A. Steinberg; E. Waltz; F. White (2004). Revisiting the JDL data fusion model II. International Conference on Information Fusion. CiteSeerX 10.1.1.58.2996.
  17. ^ Blasch, E. (2006) "Sensor, user, mission (SUM) resource management and their interaction with level 2/3 fusion" International Conference on Information Fusion.
  18. ^ http://defensesystems.com/articles/2009/09/02/c4isr1-sensor-fusion.aspx
  19. ^ Blasch, E., Steinberg, A., Das, S., Llinas, J., Chong, C.-Y., Kessler, O., Waltz, E., White, F. (2013) "Revisiting the JDL model for information Exploitation," International Conference on Information Fusion.
  20. ^ Gao, Teng; Song, Jin-Yan; Zou, Ji-Yan; Ding, Jin-Hua; Wang, De-Quan; Jin, Ren-Cheng (2015). "An overview of performance trade-off mechanisms in routing protocol for green wireless sensor networks". Wireless Networks. 22 (1): 135–157. doi:10.1007/s11276-015-0960-x. ISSN 1022-0038.
  21. ^ a b Chen, Chen; Jafari, Roozbeh; Kehtarnavaz, Nasser (2015). "A survey of depth and inertial sensor fusion for human action recognition". Multimedia Tools and Applications. 76 (3): 4405–4425. doi:10.1007/s11042-015-3177-1. ISSN 1380-7501.
  22. ^ Banovic, Nikola; Buzali, Tofi; Chevalier, Fanny; Mankoff, Jennifer; Dey, Anind K. (2016). "Modeling and Understanding Human Routine Behavior": 248–260. doi:10.1145/2858036.2858557.
  23. ^ Maria, Aileni Raluca; Sever, Pasca; Carlos, Valderrama (2015). "Biomedical sensors data fusion algorithm for enhancing the efficiency of fault-tolerant systems in case of wearable electronics device": 1–4. doi:10.1109/ROLCG.2015.7367228.
  24. ^ Bahrepour, Majid; Meratnia, Nirvana; Taghikhaki, Zahra; M. Having, Paul J. (2011). "Sensor Fusion-Based Activity Recognition for Parkinson Patients". doi:10.5772/16646.
  25. ^ Chapman, Michael W.; James, Michelle A (2019). Chapman's Comprehensive Orthopaedic Surgery: Four Volume Set (in Italian). JP Medical Ltd. p. 5375.
  26. ^ Johansson, Jennifer L.; Sherrill, Delsey M.; Riley, Patrick O.; Bonato, Paolo; Herr, Hugh (2005). "A Clinical Comparison of Variable-Damping and Mechanically Passive Prosthetic Knee Devices". American Journal of Physical Medicine & Rehabilitation. 84 (8): 563–575. doi:10.1097/01.phm.0000174665.74933.0b. ISSN 0894-9115.
  27. ^ a b Lara-Barrios, Carlos M.; Blanco-Ortega, Andrés; Guzmán-Valdivia, Cesar H.; Bustamante Valles, Karla D. (2017). "Literature review and current trends on transfemoral powered prosthetics". Advanced Robotics. 32 (2): 51–62. doi:10.1080/01691864.2017.1402704. ISSN 0169-1864.
  28. ^ a b Windrich, Michael; Grimmer, Martin; Christ, Oliver; Rinderknecht, Stephan; Beckerle, Philipp (2016). "Active lower limb prosthetics: a systematic review of design issues and solutions". BioMedical Engineering OnLine. 15 (S3). doi:10.1186/s12938-016-0284-9. ISSN 1475-925X.
  29. ^ Herr, H. M.; Grabowski, A. M. (2011). "Bionic ankle-foot prosthesis normalizes walking gait for persons with leg amputation". Proceedings of the Royal Society B: Biological Sciences. 279 (1728): 457–464. doi:10.1098/rspb.2011.1194. ISSN 0962-8452.
  30. ^ El-Sayed, Amr M.; Hamzaid, Nur Azah; Abu Osman, Noor Azuan (2014). "Technology Efficacy in Active Prosthetic Knees for Transfemoral Amputees: A Quantitative Evaluation". The Scientific World Journal. 2014: 1–17. doi:10.1155/2014/297431. ISSN 2356-6140.
  31. ^ Li, Chuanjiang; Ren, Jian; Huang, Huaiqi; Wang, Bin; Zhu, Yanfei; Hu, Huosheng (2018). "PCA and deep learning based myoelectric grasping control of a prosthetic hand". BioMedical Engineering OnLine. 17 (1). doi:10.1186/s12938-018-0539-8. ISSN 1475-925X.
  32. ^ Atzori, Manfredo; Cognolato, Matteo; Müller, Henning (2016). "Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands". Frontiers in Neurorobotics. 10. doi:10.3389/fnbot.2016.00009. ISSN 1662-5218.