IJFANS International Journal of Food and Nutritional Sciences

ISSN Print: 2319-1775, Online: 2320-7876

UTILIZATION OF CONVOLUTIONAL NEURAL NETWORKS AND TRANSFER LEARNING FOR VISION-BASED HUMAN ACTIVITY RECOGNITION


Kuruva Rahul, Guguloth Harshith, Gajula Vivek, Vangari Dinesh Kumar, P. Nirosha, Prof. D. Venkatesh

Abstract

Human activity recognition is an important problem across a range of domains, including health monitoring, human-computer interaction, and security surveillance. This study presents a novel approach to activity recognition based on a Multiscale Convolutional Neural Network (MSCNN), with the goal of improving the accuracy and robustness of activity classification from video data. The proposed MSCNN incorporates multiple convolutional scales to extract spatial and temporal features from video sequences. By processing video frames at several resolutions, the model can recognize complex and varied human activities that are often difficult to detect with standard approaches. The multiscale design allows the network to attend to both fine-grained detail and broader contextual information, leading to significant improvements in recognition performance. The MSCNN architecture consists of several convolutional layers, each operating at a different resolution; these layers extract hierarchical features that are subsequently fused to form a comprehensive representation of human actions. The model is trained on a large dataset of annotated video sequences, enabling it to learn and generalize across a wide variety of behavioral patterns and events. The effectiveness of the MSCNN is demonstrated through extensive experiments on benchmark datasets, which show considerable gains in recognition accuracy over existing approaches. The results also indicate that the model copes well with challenges such as occlusion, appearance variation, and changing environmental conditions.
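To illustrate the multiscale idea described in the abstract, the sketch below gives a minimal, hypothetical PyTorch implementation: each branch processes frames resized to a different resolution, per-frame features from all branches are fused by concatenation, and frame-level features are averaged over time before classification. The layer sizes, the set of scales, and the fusion and temporal-pooling strategies are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a multiscale CNN for video-based activity recognition,
# assuming PyTorch; all hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiscaleBranch(nn.Module):
    """One convolutional branch operating on frames resized to a given scale."""

    def __init__(self, scale: float, out_channels: int = 64):
        super().__init__()
        self.scale = scale
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> one vector per frame
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W); resize to this branch's scale first
        resized = F.interpolate(frames, scale_factor=self.scale,
                                mode="bilinear", align_corners=False)
        return self.features(resized).flatten(1)  # (batch, out_channels)


class MSCNN(nn.Module):
    """Fuses per-frame features from several scales, then averages over time."""

    def __init__(self, num_classes: int, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.branches = nn.ModuleList(MultiscaleBranch(s) for s in scales)
        self.classifier = nn.Linear(64 * len(scales), num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W)
        b, t, c, h, w = clip.shape
        frames = clip.reshape(b * t, c, h, w)
        fused = torch.cat([branch(frames) for branch in self.branches], dim=1)
        fused = fused.reshape(b, t, -1).mean(dim=1)  # temporal average pooling
        return self.classifier(fused)


if __name__ == "__main__":
    model = MSCNN(num_classes=10)
    dummy_clip = torch.randn(2, 8, 3, 112, 112)  # 2 clips of 8 frames each
    logits = model(dummy_clip)
    print(logits.shape)  # torch.Size([2, 10])

Concatenation followed by temporal averaging is only one of several plausible fusion strategies; attention-based or learned weighting schemes could be substituted without changing the overall multiscale structure.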
