
Search Results

(Total results 10)

Search Report

  • 1. Snyder, Kristian Utilizing Convolutional Neural Networks for Specialized Activity Recognition: Classifying Lower Back Pain Risk Prediction During Manual Lifting

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Computer Science

    Classification of specialized human activity datasets utilizing methods not requiring manual feature extraction is an underserved area of research in the field of human activity recognition (HAR). In this thesis, we present a convolutional neural network (CNN)-based method to classify a dataset consisting of subjects lifting an object from various positions relative to their bodies, labeled by the level of back pain risk attributed to the action. Specific improvements over other CNN-based models for both general and activity-based purposes include the use of average pooling and dropout layers. Methods to reshape accelerometer and gyroscope sensor data are also presented to encourage the model's use with other datasets. When developing the model, a dataset previously developed by the National Institute for Occupational Safety and Health (NIOSH) was used. It consists of 720 total trials of accelerometer and gyroscope data from subjects lifting an object at various relative distances from the body. In testing, 90.6% accuracy was achieved on the NIOSH lifting dataset, a significant improvement over other models tested. Saliency results are also presented to investigate underlying feature extraction and justify the results collected.

    Committee: Rashmi Jha Ph.D. (Committee Chair); Ming-Lun Lu Ph.D. (Committee Member); Boyang Wang Ph.D. (Committee Member) Subjects: Artificial Intelligence
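The abstract describes reshaping accelerometer and gyroscope data for CNN input but does not give the scheme itself; as a rough illustration of that kind of preprocessing (window length, stride, and function name are all assumptions, not the thesis's configuration), the sketch below segments a six-axis IMU stream into image-like windows a 2-D CNN could consume:

```python
import numpy as np

def windows_to_cnn_input(stream, win_len=128, stride=64):
    """Segment a (T, 6) accelerometer+gyroscope stream into
    overlapping windows shaped (n_windows, 6, win_len, 1),
    an image-like layout a 2-D CNN can consume."""
    T, n_ch = stream.shape
    starts = range(0, T - win_len + 1, stride)
    wins = np.stack([stream[s:s + win_len].T for s in starts])  # (N, 6, win_len)
    return wins[..., np.newaxis]  # trailing "image channel" axis

# Example: 10 s of synthetic 6-axis data at 100 Hz
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 6))
batch = windows_to_cnn_input(x)
print(batch.shape)  # (14, 6, 128, 1)
```

Overlapping windows of this shape are a common way to let pooling layers (such as the average pooling the thesis uses) operate along the time axis.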
  • 2. Schneider, Bradley Building an Understanding of Human Activities in First Person Video using Fuzzy Inference

    Doctor of Philosophy (PhD), Wright State University, 2022, Computer Science and Engineering PhD

    Activities of Daily Living (ADLs) are the activities that people perform every day in their home as part of their typical routine. The in-home, automated monitoring of ADLs has broad utility for intelligent systems that enable independent living for the elderly and for mentally or physically disabled individuals. With rising interest in electronic health (e-Health) and mobile health (m-Health) technology, opportunities abound for the integration of activity monitoring systems into these newer forms of healthcare. In this dissertation we propose a novel system for describing ADLs based on video collected from a wearable camera. Most in-home activities are naturally defined by interaction with objects. We leverage these object-centric activity definitions to develop a set of rules for a Fuzzy Inference System (FIS) that uses video features and the identification of objects to identify and classify activities. Further, we demonstrate that the use of the FIS enhances the reliability of the system and, owing to the linguistic nature of fuzzy systems, provides greater explainability and interpretability of results than popular machine-learning classifiers.

    Committee: Tanvi Banerjee Ph.D. (Advisor); Yong Pei Ph.D. (Committee Member); Michael Riley Ph.D. (Committee Member); Mateen Rizki Ph.D. (Committee Member); Thomas Wischgoll Ph.D. (Committee Member) Subjects: Computer Science
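The dissertation's actual rule base is not reproduced in the abstract; as a minimal sketch of Mamdani-style fuzzy inference with an invented object-centric rule ("cup present and hand motion moderate implies drinking" — memberships, universes, and thresholds are all hypothetical), one might write:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def drinking_likelihood(cup_conf, hand_motion):
    """Two illustrative Mamdani rules:
       R1: IF cup IS present AND motion IS moderate THEN drinking IS likely
       R2: IF cup IS absent THEN drinking IS unlikely."""
    present  = tri(cup_conf, 0.4, 1.0, 1.6)    # membership of 'present'
    absent   = tri(cup_conf, -0.6, 0.0, 0.6)   # membership of 'absent'
    moderate = tri(hand_motion, 0.2, 0.5, 0.8)
    y = np.linspace(0, 1, 101)                 # output universe
    likely   = np.minimum(tri(y, 0.5, 1.0, 1.5), min(present, moderate))
    unlikely = np.minimum(tri(y, -0.5, 0.0, 0.5), absent)
    agg = np.maximum(likely, unlikely)         # aggregate clipped consequents
    if agg.sum() == 0:
        return 0.0
    return float((y * agg).sum() / agg.sum())  # centroid defuzzification

# Detected cup with moderate hand motion should score higher than no cup
print(drinking_likelihood(0.9, 0.5) > drinking_likelihood(0.1, 0.5))  # True
```

The rule firing strengths here are exactly the linguistic, inspectable quantities the abstract credits for the FIS's interpretability advantage over black-box classifiers.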
  • 3. Chakraborty, Shatakshi A study on context driven human activity recognition framework

    MS, University of Cincinnati, 2015, Engineering and Applied Science: Computer Science

    In recent years, human activity recognition has drawn much attention in the field of human-computer interaction. There is a growing demand for activity recognition in many areas of everyday living, such as health-care systems for patient health monitoring, home-based rehabilitation, entertainment, and more. In this research, we aim to apply activity recognition theory in health care to monitor patient behavior during the waiting time at clinical visits. In today's health-care system, patients wait about 22 minutes on average in doctors' offices, and more than four hours in emergency departments. As wait time increases, patient satisfaction drops. With consumers increasingly expecting instant service, health-care providers and hospitals are looking for ways to improve productivity, such as shortening each patient's path through the system: using kiosks instead of reception desks, offering speedier check-in for returning patients, funneling visitors with previously arranged appointments to the appropriate part of the clinic or hospital, and providing more attentive face-to-face care to first-time visitors in need of it. The purpose of this study is to investigate a computer-based means to obtain useful data on typical human behaviors during visits to clinics. A framework to implement the technology to study human behavior was recently proposed by Tao Ma [15]. In his four-layer hierarchical framework, computer vision is used to study and understand human behavior through body movements. We explore a second framework, developed by Saguna et al. [25], which uses probability theory and statistical learning methods to discover complex activity signatures. Additional modalities of information, such as speech, facial expressions, and time-based contextual information, can also be incorporated to interpret various human behaviors and elicit the cogn (open full item for complete abstract)

    Committee: Chia Han Ph.D. (Committee Chair); William Wee Ph.D. (Committee Member); Xuefu Zhou Ph.D. (Committee Member) Subjects: Computer Science
  • 4. Thomas, Brennan LSTM Neural Networks for Detection and Assessment of Back Pain Risk in Manual Lifting

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Computer Science

    Repetitive occupational lifting of objects has been shown to create an increased risk of back pain. Ergonomic workstations that promote proper lifting technique can reduce this risk, but it is difficult to assess such workstations without constant risk monitoring. Inertial measurement unit (IMU) sensors such as accelerometers and gyroscopes have been used with success in human activity recognition (HAR) systems to determine when specified actions occur, but largely only for general activities for which it is easy to collect a significant amount of data. There has been considerably less work toward the assessment of specialized tasks, such as lifting. This work presents a dual system utilizing machine learning for detection of lifting action and assessment of the risks involved in that action. The proposed system achieves good performance in both the detection and assessment tasks using raw time-series IMU data with minimal preprocessing. Applying data augmentation provides additional performance gains for the assessment task, and saliency mapping determines optimal sensor configurations for system modifications. The presented system can be used to monitor the risks involved with lifting actions required in a workplace, guiding efforts to mitigate long-term risk.

    Committee: Rashmi Jha Ph.D. (Committee Chair); Ming-Lun Lu (Committee Member); Fred Annexstein Ph.D. (Committee Member) Subjects: Computer Science
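The abstract mentions data augmentation of raw IMU windows without detailing the transforms; a common choice for time-series IMU data, shown here purely as an assumed example (the thesis's actual augmentations are not given), is additive jitter plus per-channel magnitude scaling:

```python
import numpy as np

def augment_imu(window, rng, jitter_std=0.05, scale_std=0.1):
    """Return a perturbed copy of a (T, channels) IMU window:
    additive Gaussian jitter plus a random per-channel scaling,
    simulating sensor noise and placement/calibration variation."""
    noise = rng.normal(0.0, jitter_std, size=window.shape)
    scale = rng.normal(1.0, scale_std, size=(1, window.shape[1]))
    return window * scale + noise

rng = np.random.default_rng(42)
w = rng.normal(size=(128, 6))   # one window: 128 samples, 6 IMU channels
aug = augment_imu(w, rng)
print(aug.shape)  # (128, 6)
```

Each call yields a new perturbed copy, so the training set for the assessment model can be expanded many-fold from the same recorded trials.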
  • 5. Maguire, Gabriel Remote Smoker Monitoring System Incorporating Preemptive Smoking Detection

    Master of Sciences (Engineering), Case Western Reserve University, 0, EECS - Computer Engineering

    Smoking cessation is a significant challenge for many people addicted to cigarettes and tobacco. Mobile health research into smoking cessation is primarily focused on mobile-phone data collection, using either self-reporting or sensor-monitoring techniques. In the past five years, with the increased popularity of smartwatch devices, research has been conducted to recognize movements associated with smoking behaviors based on accelerometer data from the internal sensors of a user's smartwatch. Previous smoking detection methods focused on classifying current smoking behavior. For many users who are trying to quit, this form of detection may be insufficient, as the user has already relapsed. In this thesis, I present a smoking cessation system utilizing a smartwatch and a finger sensor that is capable of detecting pre-smoking activities to discourage future smoking behavior. Pre-smoking activities include grabbing a pack of cigarettes or lighting a cigarette, and these activities are often immediately followed by smoking. Therefore, through accurate detection of pre-smoking activities, we can alert the user before they have relapsed. Our smoking cessation system combines data from a smartwatch, for gross accelerometer and gyroscope information, and a wearable finger sensor, for detailed finger bend-angle information. This thesis also touches on the design considerations of remote healthcare management and how the presented smoking cessation system can fit into a remote smoker health-monitoring framework.

    Committee: Ming-Chun Huang Ph.D (Committee Chair); Christos Papachristou Ph.D (Committee Member); Kenneth Loparo Ph.D (Committee Member) Subjects: Computer Engineering
  • 6. Ray, Sujan Dimensionality Reduction in Healthcare Data Analysis on Cloud Platform

    PhD, University of Cincinnati, 2020, Engineering and Applied Science: Computer Science and Engineering

    Nowadays it is easy to amass huge collections of healthcare data, especially because of relatively cheap wearable devices. We can then mine clinical data for meaningful information, which helps in making better decisions and improves the healthcare sector by minimizing costs. Healthcare datasets available in the public domain have many features, and it is impossible to manually identify the factors that contribute to a disease [1]. Therefore, it is necessary to use Machine Learning (ML) algorithms to identify, out of a huge number of features, the most important ones for detecting the occurrence of disease. We can thus predict disease more accurately with a model trained on only the top features of the dataset. Considering that healthcare data comes from different sources in different sizes, there is a need for a cloud-based platform. The first aim of this dissertation is to focus on the important field where big data is used in health care to diagnose diseases before they occur or to avoid them. Breast Cancer (BC) is the second most common cancer in women after skin cancer and has become a major health issue. As a result, it is very important to diagnose BC correctly and to categorize tumors as malignant or benign. ML techniques have unique advantages and are widely used to analyze complex BC datasets and predict the disease. The Wisconsin Diagnostic Breast Cancer (WDBC) dataset has been used by researchers in this field to develop predictive models for BC. In this dissertation, we propose a method for analyzing and predicting BC on the same dataset using Apache Spark. The experiments are executed on a Hadoop cluster, a cloud platform provided by the Electrical Engineering and Computer Science (EECS) department at the University of Cincinnati. Our results show that selecting the right features significantly improves the accuracy of predicting BC.
The s (open full item for complete abstract)

    Committee: Marc Cahay Ph.D. (Committee Chair); Dharma Agrawal D.Sc. (Committee Member); Rui Dai Ph.D. (Committee Member); Wen-Ben Jone Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Science
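The dissertation runs feature selection and prediction on Apache Spark over a Hadoop cluster; as a single-machine stand-in, a scikit-learn sketch on the same WDBC dataset (the selector, classifier, and k=10 are assumptions here, not the dissertation's configuration) illustrates how keeping only top-ranked features can preserve predictive accuracy:

```python
# Single-machine sketch of top-k feature selection on WDBC
# (the dissertation itself runs on Apache Spark; scikit-learn stands in here).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # WDBC: 569 samples, 30 features

# Rank features by ANOVA F-score and keep the top 10
selector = SelectKBest(f_classif, k=10).fit(X, y)
top = selector.get_support(indices=True)
print("top feature indices:", top)

# Compare cross-validated accuracy: all 30 features vs. the selected subset
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
acc_all = cross_val_score(clf, X, y, cv=5).mean()
acc_top = cross_val_score(clf, X[:, top], y, cv=5).mean()
print(f"all 30 features: {acc_all:.3f}, top 10: {acc_top:.3f}")
```

On a cluster, the same rank-then-retrain pattern would be expressed with Spark MLlib transformers instead of scikit-learn estimators.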
  • 7. Bai, Yunhao WiFi-Based Driver Activity Recognition Using CSI Signal

    Doctor of Philosophy, The Ohio State University, 2020, Electrical and Computer Engineering

    With the development of intelligent transportation and Advanced Driver Assistance Systems (ADAS), a vehicle has become more than a transportation tool; it is now an entire platform for different applications, including driving-status monitoring, route guidance, and entertainment. Among these applications, driver activity recognition plays an important role from both road-safety and human-vehicle interaction perspectives. For example, if a fatigue-related behavior such as yawning or nodding can be recognized beforehand, the vehicle warning system can be activated and possibly let the vehicle take over the driving task from the driver. Until now, existing work on driver activity recognition has focused mostly on cameras, because they are cost-efficient and easy to install. However, a camera system has several key limitations, such as blurriness caused by bumpy roads and varying brightness conditions, which make recognition performance sensitive to road and weather conditions. On the other hand, wireless sensing based on WiFi signals has recently shown great promise in human gesture recognition, mainly because of its 1) non-intrusive nature, 2) high recognition accuracy, and 3) use of only commercial off-the-shelf devices. Recent studies have also successfully exploited the Channel State Information (CSI) of the WiFi wireless channel, traditionally used by the WiFi receiver for channel monitoring, to recognize human activities. In this dissertation, we use the CSI signal for driving activity recognition based on commodity WiFi devices, and propose different solutions to overcome the limitations of state-of-the-art WiFi-based recognition schemes. First, we propose WiDrive, a real-time in-car driver activity recognition system based on CSI changes of WiFi signals. We evaluate WiDrive in real cars and show that WiDrive has an average recognition accuracy of 91.3%.
Second, to overcome the performance degradation when there are pas (open full item for complete abstract)

    Committee: Xiaorui Wang Prof (Advisor); Haijun Su Prof (Committee Member); Dong Xuan Prof (Committee Member); Jay Kandampully Prof (Committee Member) Subjects: Electrical Engineering
  • 8. Angeleas, Anargyros A Multi-Formal Languages Collaborative Scheme for Complex Human Activity Recognition and Behavioral Patterns Extraction

    Doctor of Philosophy (PhD), Wright State University, 2018, Computer Science and Engineering PhD

    Human activity recognition has been an actively researched domain for the past few decades and is one of the most prominent applications today. It is already part of our life, but due to the high level of uncertainty and the challenges of human detection, we have only application-specific solutions; the problem is very demanding and remains unsolved. In this PhD dissertation we delve into the problem and approach it from a variety of viewpoints. First, we present and evaluate different architectures and frameworks for activity recognition. Thereafter, the focal point of our attention is automatic human activity recognition. We present a survey that compares, categorizes, and evaluates research surveys and reviews in four categories. Then a novel, fully automatic, view-independent, multi-formal-languages collaborative scheme is presented for complex activity and emotion recognition, which is the main contribution of this dissertation. We propose a collaboration of three formal languages responsible for parsing, manipulating, and understanding all the data needed. Artificial Neural Networks are used to classify an action primitive (simple activity) as well as to detect a change of activity. Finally, we capitalize on the advantages of Fuzzy Cognitive Maps and Rule-Based Colored Petri Nets to classify a sequence of activities as normal or abnormal.

    Committee: Nikolaos Bourbakis Ph.D. (Advisor); Soon Chung Ph.D. (Committee Member); Mateen Rizki Ph.D. (Committee Member); George Tsihrintzis Ph.D. (Committee Member) Subjects: Computer Science
  • 9. Tsitsoulis, Athanasios A Methodology for Extracting Human Bodies from Still Images

    Doctor of Philosophy (PhD), Wright State University, 2013, Computer Science and Engineering PhD

    Monitoring and surveillance of humans is one of the most prominent applications of today, and it is expected to be part of many future aspects of our life for safety, assisted living, and many other purposes. Many efforts have been made toward automatic and robust solutions, but the general problem is very challenging and still remains open. In this PhD dissertation we examine the problem from many perspectives. First, we study the performance of a hardware architecture designed for large-scale surveillance systems. Then, we focus on the general problem of human activity recognition, present an extensive survey of methodologies that deal with this subject, and propose a maturity metric to evaluate them. Image segmentation is one of the most popular image-processing algorithms found in the field, and we propose a blind metric to evaluate segmentation results with regard to the activity at local regions. Finally, we propose a fully automatic system for segmenting and extracting human bodies from challenging single images, which is the main contribution of the dissertation. Our methodology is a novel bottom-up approach relying mostly on anthropometric constraints and is facilitated by our research in the fields of face, skin, and hand detection. Experimental results and comparison with state-of-the-art methodologies demonstrate the success of our approach.

    Committee: Nikolaos Bourbakis Ph.D. (Advisor); Soon Chung Ph.D. (Committee Member); Yong Pei Ph.D. (Committee Member); Ioannis Hatziligeroudis Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science
  • 10. Zhang, Sen Smartphone Based Activity Recognition System

    Master of Science, The Ohio State University, 2012, Computer Science and Engineering

    Nowadays, obesity and overweight are epidemic worldwide. Obese people are at risk for diseases such as heart disease, type 2 diabetes, stroke, and certain types of cancer. Finding an effective treatment for obesity has become a major challenge for researchers. Physical activity plays a vital role in treating obesity, as the energy consumed during such activity helps “burn off” excess body fat. With the rapid development of smartphone technology, we realized that ubiquitous mobile phones can potentially monitor physical activity thanks to their motion sensors, e.g., accelerometers, orientation sensors, and gyroscopes. In this thesis, we design and implement an activity recognition system using mobile phones. Unlike prior work, our system uses acceleration, orientation, and gyroscope data for online activity recognition. Our system samples phone sensor data at 5 Hz and uploads it to a central server for processing. Our preliminary experiments indicate our system's potential for activity recognition.

    Committee: Dong Xuan (Advisor); Feng Qin (Committee Member) Subjects: Computer Science
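The last abstract does not specify what the server computes from the 5 Hz sensor stream; a typical hand-crafted feature set for such low-rate accelerometer windows, assumed here purely for illustration, is per-axis mean and standard deviation plus mean signal magnitude:

```python
import numpy as np

def window_features(window):
    """Hand-crafted features for one (T, 3) accelerometer window
    sampled at 5 Hz: per-axis mean/std plus mean signal magnitude."""
    mean = window.mean(axis=0)                    # (3,) per-axis means
    std = window.std(axis=0)                      # (3,) per-axis std devs
    mag = np.linalg.norm(window, axis=1).mean()   # mean acceleration magnitude
    return np.concatenate([mean, std, [mag]])     # (7,) feature vector

rng = np.random.default_rng(1)
w = rng.normal(size=(10, 3))   # 2 s of 3-axis data at 5 Hz
f = window_features(w)
print(f.shape)  # (7,)
```

Feature vectors like this one are small enough to upload cheaply at 5 Hz and feed directly into a conventional classifier on the server side.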