Search Results

(Total results 30)

  • 1. Rutherford, Sarah Business Environmental Design, Consumer Visual Literacy and Self-Concept

    MFA, Kent State University, 2012, College of Communication and Information / School of Visual Communication Design

    This research explores the hypothesis that the identity and environmental design of a business, whether created intentionally or not, attracts customers because it affirms some aspect of the customer's self-concept. Two online surveys featuring photo-simulated shopping experiences in eight shopping scenarios—grocery stores, book stores, shoe stores, bakeries, wine stores, coffee shops, sit-down restaurants, and clothing stores—were distributed to participants to evaluate self-concept, purchasing behavior, the application of retail patron images, store choice, and consumer perception of brand personality. Although connections to self-conflict were not conclusive, the findings of this research imply that consumers make judgments about the quality and availability of merchandise and service based on the exterior of a business. Consumers are also able to convey whom they think shops at a given store, an additional motivator for patronage. The research shows that it is important for retailers to understand their target audience in order to connect with them, and that predictive value may lie in consumer preference for similar store types.

    Committee: Ken Visocky O'Grady (Committee Chair); Sanda Katila (Committee Member); Jerry Kalback (Committee Member) Subjects: Aesthetics; Architectural; Business Community; Design; Interior Design; Urban Planning
  • 2. Wachira, Alice Finite Difference Methods for Non-linear Interface Elliptic and Parabolic Problems

    Master of Arts (MA), Bowling Green State University, 2024, Mathematics/Mathematical Statistics

    Interface problems frequently appear in numerous physical, biological, and scientific contexts. These problems usually involve differential equations where the input data have discontinuities at one or more interface positions within the solution domain. In this thesis, a numerical method for solving one-dimensional elliptic and parabolic problems with linear and non-linear interface jump conditions at a single interface position is presented. To effectively solve these interface problems, we integrate jump conditions into the numerical method, ensuring that these conditions are met at the interface position. Our approach combines finite difference schemes with a technique termed the $a$-method, specifically devised to address the complexities associated with non-linear interface jump conditions. The convergence behavior of these methods in numerically solving elliptic and parabolic problems with linear interface jump conditions is also examined. Through our numerical examples, we demonstrate that our proposed method achieves an approximate first-order convergence. This occurs because the non-interface grid points exhibit second-order accuracy, while the interface points achieve only first-order accuracy, thereby lowering the overall convergence order to first order when evaluated using the maximum norm criterion.

    Committee: So-Hsiang Chou Ph.D. (Committee Chair); Tong Sun Ph.D. (Committee Member) Subjects: Applied Mathematics; Mathematics
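The jump-condition treatment described in the abstract above can be illustrated on a deliberately simple linear test problem (a hypothetical example, not the thesis's $a$-method or its non-linear cases): solve u'' = 0 on (0, 1) with a prescribed solution jump [u] = 1 and flux continuity [u'] = 0 at x = 0.5, folding the jump into the right-hand side of the standard three-point stencil at the two grid points straddling the interface.

```python
import numpy as np

def solve_interface_poisson(n=100, alpha=0.5, jump_u=1.0, uL=0.0, uR=2.0):
    """Solve u'' = 0 on (0,1), u(0)=uL, u(1)=uR, with a solution jump
    [u] = jump_u and flux continuity [u'] = 0 at x = alpha.
    A hypothetical linear test problem, not the thesis's a-method."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)          # interior grid points
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    b = np.zeros(n)
    b[0] -= uL                             # fold in boundary values
    b[-1] -= uR
    # interface lies between grid indices j and j+1
    j = np.searchsorted(x, alpha) - 1
    # ghost-value correction: the left stencil sees u[j+1] - jump_u,
    # the right stencil sees u[j] + jump_u; move the jump to the RHS
    b[j] += jump_u
    b[j + 1] -= jump_u
    return x, np.linalg.solve(A, b)
```

Because the exact solution of this test problem is piecewise linear (u = x left of the interface, u = x + 1 right of it), the corrected scheme reproduces it to machine precision; the non-linear jump conditions treated in the thesis additionally require an iteration such as the $a$-method.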
  • 3. Huang, Ruochen Enhancing Exponential Family PCA: Statistical Issues and Remedies

    Doctor of Philosophy, The Ohio State University, 2023, Statistics

    Exponential family PCA (Collins et al., 2001) is a widely used dimension reduction tool for capturing a low-dimensional latent structure of exponential family data such as binary data or count data. As an extension of principal component analysis (PCA), it imposes a low-rank structure on the natural parameter matrix, which can be factorized into two matrices, namely, the principal component loadings matrix and scores matrix. These loadings and scores share the same interpretation and functionality as those in PCA. Loadings enable exploration of associations among variables, scores can be utilized as low-dimensional data embeddings, and estimated natural parameters can impute missing data entries. Despite the popularity of exponential family PCA, we find several statistical issues associated with this method. We investigate these issues from a statistical perspective and propose remedies in this dissertation. Our primary concern arises from the joint estimation of loadings and scores through the maximum likelihood method. As in the well-known incidental parameter problem, this formulation with scores as separate parameters may result in inconsistency in the estimation of loadings under the classical asymptotic setting where the data dimension is fixed. We examine the population version of this formulation and show that it lacks Fisher consistency in loadings. Additionally, estimating scores can be viewed as performing a generalized linear model with loadings as covariates. Maximum likelihood estimation (MLE) bias is naturally involved in this process but is often ignored. Upon identifying two major sources of bias in the estimation process, we propose a bias correction procedure to reduce their effects. First, we deal with the discrepancy between true loadings and their estimates under a limited sample size. We use the iterative bootstrap method to debias loadings estimates. 
Then, we account for sampling errors in loadings by treating them as covariates with me (open full item for complete abstract)

    Committee: Yoonkyung Lee (Advisor); Asuman Turkmen (Committee Member); YunZhang Zhu (Committee Member) Subjects: Statistics
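For intuition, the joint maximum-likelihood formulation that the abstract above critiques can be sketched in a few lines for binary data (hypothetical toy code, not Collins et al.'s implementation): a rank-k natural parameter matrix Theta = U V^T is fitted by joint gradient ascent on the Bernoulli log-likelihood over scores U and loadings V. The abstract's point is precisely that this joint estimation of scores and loadings inherits the incidental-parameter problem.

```python
import numpy as np

def logistic_pca(X, k=2, steps=500, lr=0.1, seed=0):
    """Toy exponential family PCA for binary data: fit a rank-k natural
    parameter matrix Theta = U @ V.T by joint gradient ascent on the
    Bernoulli log-likelihood. Illustrative only; step sizes and scaling
    are ad hoc assumptions."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    U = 0.01 * rng.standard_normal((n, k))     # scores (one row per case)
    V = 0.01 * rng.standard_normal((p, k))     # loadings (one row per var)
    for _ in range(steps):
        theta = U @ V.T
        resid = X - 1.0 / (1.0 + np.exp(-theta))   # X - sigmoid(theta)
        U += lr * resid @ V / p                    # gradient ascent steps
        V += lr * resid.T @ U / n
    probs = 1.0 / (1.0 + np.exp(-(U @ V.T)))       # fitted probabilities
    return U, V, probs
```

The fitted natural parameters (or probabilities) can then impute missing entries, exactly as described above, and the same code makes the statistical issue visible: the number of score parameters in U grows with the sample size n.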
  • 4. Yang, Fang Nonlocal Priors in Generalized Linear Models and Gaussian Graphical Models

    PhD, University of Cincinnati, 2022, Arts and Sciences: Mathematical Sciences

    High-dimensional data, where the number of features or covariates is larger than the number of independent observations, are ubiquitous and are encountered on a regular basis by statistical scientists both in academia and in industry. Due to modern advancements in data storage and computational power, the high-dimensional data revolution has significantly occupied mainstream statistical research. In this thesis, we undertake the problem of variable selection in high-dimensional generalized linear models, as well as the problem of high-dimensional sparsity selection for covariance matrices in Gaussian graphical models. We first consider a hierarchical generalized linear regression model with the product moment (pMOM) nonlocal prior over coefficients and examine its properties. Under standard regularity assumptions, we establish strong model selection consistency in a high-dimensional setting, where the number of covariates is allowed to increase at a sub-exponential rate with the sample size. The Laplace approximation is implemented for computing the posterior probabilities, and the shotgun stochastic search procedure is suggested for exploring the posterior space. The proposed method is validated through simulation studies and illustrated by a real data example on functional activity analysis in an fMRI study for predicting Parkinson's disease. Moreover, we consider sparsity selection for the Cholesky factor L of the inverse covariance matrix in high-dimensional Gaussian Directed Acyclic Graph (DAG) models. The sparsity is induced over the space of L via the pMOM nonlocal prior and the hierarchical hyper-pMOM prior. We also establish model selection consistency for the Cholesky factor under more relaxed conditions compared to those in the literature and implement an efficient MCMC algorithm that selects the sparsity pattern for each column of L in parallel.
We demonstrate the validity of our theoretical results via numerical simulations, and also use further s (open full item for complete abstract)

    Committee: Xuan Cao Ph.D. (Committee Member); Xia Wang Ph.D. (Committee Member); Seongho Song Ph.D. (Committee Member); Lili Ding Ph.D. (Committee Member) Subjects: Statistics
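The pMOM prior mentioned in the abstract above has a simple closed form in its first-order case. For order r = 1, the density is pi(beta) = beta^2 / (tau * sigma^2) * N(beta; 0, tau * sigma^2): a Gaussian reweighted by beta^2, so it vanishes exactly at beta = 0, which is what makes the prior "nonlocal" and lets it penalize negligible coefficients. A small sketch (hyperparameter values here are illustrative, not the thesis's choices):

```python
import numpy as np

def pmom_density(beta, tau=1.0, sigma2=1.0):
    """First-order (r = 1) product-moment (pMOM) nonlocal prior density:
    pi(beta) = beta^2 / (tau*sigma2) * Normal(beta; 0, tau*sigma2).
    Unlike a local (e.g. Gaussian) prior, it is exactly zero at beta = 0."""
    s2 = tau * sigma2
    gauss = np.exp(-np.asarray(beta) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return np.asarray(beta) ** 2 / s2 * gauss
```

Since E[beta^2] = tau * sigma^2 under the underlying Gaussian, the reweighted density integrates to one without any extra normalizing constant.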
  • 5. Luo, Xiao The Effect of Orthographic Neighborhood Size and Consistency on Character and Word Recognition by Learners of Chinese as a Second Language and Native Chinese Speakers

    PhD, University of Cincinnati, 2021, Education, Criminal Justice, and Human Services: Educational Studies

    Among contemporary Chinese characters, approximately 80% are semantic-phonetic compound characters, which consist of a phonetic radical that signals pronunciation and a semantic radical that suggests meaning. A cluster of such characters sharing the same phonetic radical is referred to as an orthographic neighborhood. Previous research suggested that neighboring characters, if they have consistent pronunciations, could produce facilitatory consistency effects on character naming by both native (L1) Chinese speakers and learners of Chinese as a second language (L2). However, the larger the number of characters in an orthographic neighborhood, the more errors and slower responses were observed in naming tasks among L1 Chinese speakers, suggesting an inhibitory neighborhood size (NS) effect. This NS effect has not been investigated in L2 Chinese learners' reading of single characters and two-character words. This dissertation aimed to fill this research gap by inviting 17 L2 Chinese learners and 35 L1 Chinese speakers (control group) to complete two studies. Study 1 focused on participants' reading of single semantic-phonetic compound characters. Experiment 1(a) used regular characters (i.e., a character's pronunciation is the same as that of its phonetic radical) whereas Experiment 1(b) used irregular characters. Both experiments adopted a 2 (NS) x 2 (consistency) x 2 (L1/L2 groups) repeated-measures design. Participants completed lexical decision tasks, and their reaction times (RTs) and accuracy data were collected and analyzed. ANOVA results of Experiment 1(a) showed significant main effects of NS and consistency as well as their interaction. A facilitatory consistency effect was found when L2 learners read small-NS characters. Results of Experiment 1(b) suggested significant main effects of NS and consistency, but no significant interactions were found.
Study 2 examined the effects of a semantic-phoneti (open full item for complete abstract)

    Committee: Dr. Hye K. Pae (Committee Chair) Subjects: Education; Education Philosophy
  • 6. Patil, Vivek Criteria for Data Consistency Evaluation Prior to Modal Parameter Estimation

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Mechanical Engineering

    Experimental modal analysis (EMA), which is an integral part of vibration analysis, deals with finding the dynamic characteristics of a system, namely the natural frequencies, damping, and the corresponding mode shapes. One of the essential requirements to ensure the validity of results obtained in EMA is the consistency or regularity of the input-output measurements. Non-conformity of the measurement set to the constraints can result in deviation from the structural system's actual characteristics. A consistent data set meets the constraints of methods in modal parameter estimation (MPE) and uniformly portrays identical information. Many validation procedures, such as principal component analysis (PCA), the synthesis correlation coefficient, and the modal assurance criterion (MAC), exist as post-MPE checks to verify the estimated model's quality and hence that of the measured data. However, since these methods are employed post-MPE, there is a need for pre-MPE methods to check the validity of measurements beforehand and save any additional effort. This thesis work develops data sanity checks over the collected experimental data before the modal estimation procedure is performed, to ensure that the measurements are consistent with respect to each other and comply with experimental modal analysis assumptions. Building on the concepts of reciprocity and the driving point, MATLAB-based tools are developed that identify the driving points and perform consistency evaluation. A system equivalent reduction-expansion process (SEREP) based method is then explored, which checks each measurement's consistency against the entire data set. Calibration consistency is evaluated by calculating the frequency response assurance criterion (FRAC) and frequency response scale factor (FRSF) values between the cross-frequency response function measurements.
The performance of these developed methods is then tested using analytical and exper (open full item for complete abstract)

    Committee: Randall Allemang Ph.D. (Committee Chair); Michael Mains M.S. (Committee Member); Allyn Phillips Ph.D. (Committee Member) Subjects: Mechanical Engineering
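The FRAC mentioned in the abstract above has a standard definition: a normalized correlation between two frequency response functions (FRFs) sampled at the same frequency lines, lying in [0, 1] and equal to 1 when the FRFs agree up to a complex scale factor. The thesis's tooling is MATLAB-based; a Python sketch of the same quantity (variable names are mine):

```python
import numpy as np

def frac(H1, H2):
    """Frequency Response Assurance Criterion between two complex FRF
    vectors sampled at identical frequency lines:
        FRAC = |H1^H H2|^2 / ((H1^H H1)(H2^H H2))
    Returns 1.0 when H2 is a complex multiple of H1, and values near 0
    for unrelated FRFs."""
    num = np.abs(np.vdot(H1, H2)) ** 2          # vdot conjugates H1
    den = np.real(np.vdot(H1, H1)) * np.real(np.vdot(H2, H2))
    return num / den
```

A reciprocity sanity check, for example, computes FRAC between the cross FRFs H_pq and H_qp; values well below 1 flag an inconsistent or miscalibrated measurement before MPE is attempted.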
  • 7. Marsden, Courtney Academic Freedom in the Age of Posts and Tweets

    PHD, Kent State University, 2021, College of Education, Health and Human Services / School of Foundations, Leadership and Administration

    This study had two overarching goals: (1) to develop scales with acceptable psychometric properties and (2) to contribute to the growing body of literature on the spiral of silence theory (SoST) in the context of Social Networking Sites (SNSs). These goals were addressed with findings from two manuscripts, both of which leveraged data from a survey distributed to faculty at a large, public university in the U.S. (N = 256). Goal 1 was addressed with results from Rasch Principal Components Analysis (PCA), Rasch Rating Scale Model (RSM), and indices of Internal Consistency Reliability: (1) Contextual Fear of Social Isolation (CFSI; α = .85 and α = .69), (2) Contextual Fear of Reprimand (CFEAR; α = .88), (3) Fear of Reprimand (FEAR; α = .96 and α = .81), and (4) Willingness to Express Opinions (WTEO; α = .91 and α = .94). Goal 2 was addressed with results from a series of Two-Group Multivariate Analysis of Variance (MANOVAs) and Conditional Process Analyses. Using Noelle-Neumann's (1974; 1993) SoST as a foundation, a moderated mediation model was used to describe the relationship among opinion congruence, contextual fear of social isolation, issue involvement, and willingness to express opinions (Fear of Isolation-Consequences and -Perceptions Models). This study also addressed a criticism of the SoST by testing an alternative catalyst (Fear of Sanctions Model). The MANOVAs were significant, while the Index of Moderated Mediation was not for any models. Additional research with a topic that can elicit a wider range of reactions is needed.

    Committee: Aryn C. Karpinski PhD (Committee Chair); Jason Schenker PhD (Committee Member); Anthony Vander Horst PhD (Committee Member) Subjects: Higher Education
  • 8. Hong, Ellen Understanding the Antecedents of Perceived Authenticity to Predict Cultural Tourists' Behavioral Intention: The Case of Cambodia's Angkor Wat

    Hospitality and Tourism, Ohio University, 2021, Hospitality and Tourism, College of Education

    Culture has long been identified by numerous marketers and consumer behavioral theorists as an important factor influencing tourists' traveling intentions. Tourist behavioral intention is also a prominent topic of interest for many researchers within the hospitality industry. This paper aims to determine the antecedents of cultural tourists' behavioral intention, specifically in the case of Cambodia's cultural heritage site, the Angkor Wat temple. Factors including uniqueness, scarcity, longevity, longitudinal consistency, perceived authenticity (object-based and existential), and behavioral intention were examined using structural equation modeling (SEM). This study offers insights into the factors that affect tourists' intention to visit Angkor Wat. Cambodia's related government body and destination marketers can utilize this knowledge to create effective marketing campaigns as a means to boost Cambodia's economy through tourism.

    Committee: Hyeyoon-Rebecca Choi (Advisor); V. Ann Paulins (Committee Member); Boss David (Committee Member) Subjects: Behavioral Sciences
  • 9. Hendley, Debbie Insomnia, Race, and Mental Wellness

    Psy. D., Antioch University, 2019, Antioch Santa Barbara: Clinical Psychology

    This phenomenological study examines the experiences of insomnia among sixteen Americans who are descendants of people who lived in the United States during chattel slavery. The investigation is guided by the following two central questions: Is the lived experience of insomnia among African Americans the same as the experience among non-Hispanic White Americans? In addition, what is the lived experience of sleep among African Americans and non-Hispanic White Americans? Each participant met individually with the researcher and privately reflected on their experience with insomnia, defined here as a condition in which individuals have difficulty initiating and maintaining sleep that furthermore affects their daytime functioning. As the investigation unfolded, the researcher studied the experiences of the participants through a multimodal lens informed primarily by Festinger's Cognitive Dissonance Theory and Heidegger's Hermeneutics. As participants of this research investigation reflect on their experience, we observe the interplay between insomnia, race, and mental wellness coming into focus. Emotional experiences are captured, and the reflective experience allows for a re-examination of the legacies and effects of American history. Findings in this study support the notion that people tend to use cognitive dissonance when their beliefs are challenged, and those participants with a preference for consistency also experienced insomnia more frequently. No evidence was uncovered of the participants' insomnia being a direct effect of the inter-generational transmission of the trauma associated with chattel slavery. However, many African American families continue to report being severely negatively impacted by their ancestors' experiences during slavery and its aftermath. Insomnia, a common symptom of posttraumatic stress disorder, can credibly be considered one likely sequela of the traumatic impact of slavery on the lives of African Americans.
This Dissertation is ava (open full item for complete abstract)

    Committee: Daniel Schwartz PhD (Committee Chair); Kia-Keating EdD (Committee Member); Kimberly Finney PsyD (Committee Member) Subjects: Cognitive Psychology; Minority and Ethnic Groups; Psychology
  • 10. Stoll, Kevin Methodologies for Missing Data with Range Regressions

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2019, Statistics

    A primary focus of this dissertation is to draw inferences about a response variable that is subject to being missing using large samples. When some response variables are missing and the missing behavior is dependent on the response variable, simply using the sample mean of the non-missing responses to estimate the population mean is biased in general. There are, however, historical mean estimators that can circumvent the bias. Examples include the inverse propensity weighted, regression, double-robust, stratification, and empirical likelihood estimators. In order to obtain an appropriate estimate on the targeted population mean, these methods place greater weight on non-missing observations likely to be missing. We review the historical estimators, and we propose new estimators and methodologies for mean estimation and beyond. The consistency of each estimator primarily depends on the existence of non-missing covariates, the missing at random assumption, and a correctly specified model relating the covariates to the missing behavior or response, each of which is discussed. Among our proposals lie new double-robust estimators which obtain lower variance than historical methods when the regression or propensity function is known and yield competitive performances when regression and propensity functions are estimated. Additionally, we detail bootstrap approaches which enable researchers to efficiently draw inferences beyond mean estimation. Furthermore, we rework range regression for missing response variables, but also develop nonparametric range regression which models the average rank versus each bin. We argue the average rank to be superior to the median and mean for measuring trends among the bins particularly when researchers seek distributional superiority or when the sample mean is not guaranteed to converge, e.g. under Cauchy response. In doing so, we define ascendancy, a measure of pairwise distributional superiority. 
Then, we interpret the re (open full item for complete abstract)

    Committee: John Chen (Advisor); Wei Ning (Committee Member); Craig Zirbel (Committee Member); Helen Michaels (Other) Subjects: Statistics
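The inverse propensity weighted estimator reviewed in the abstract above is short enough to state directly: with response indicators r_i and known (or estimated) propensities pi_i = P(observed | covariates), the mean estimate is the average of r_i * y_i / pi_i, which upweights the observed responses that were unlikely to be observed. A minimal simulation sketch, assuming known propensities (the estimated-propensity and double-robust variants the abstract discusses are not shown):

```python
import numpy as np

def ipw_mean(y, observed, propensity):
    """Inverse-propensity-weighted (Horvitz-Thompson style) estimate of
    E[Y] under missingness at random given covariates. `y` may hold NaN
    where unobserved; those entries are zeroed and carry no weight."""
    y = np.nan_to_num(np.asarray(y, dtype=float))
    r = np.asarray(observed, dtype=float)
    pi = np.asarray(propensity, dtype=float)
    return np.mean(r * y / pi)
```

When missingness depends on y through the covariates (e.g. larger responses are more likely to be observed), the naive mean over observed cases is biased while the IPW estimate is not; consistency hinges on a correctly specified propensity model, exactly as the abstract notes.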
  • 11. Shi, Rong Efficient data and metadata processing in large-scale distributed systems

    Doctor of Philosophy, The Ohio State University, 2018, Computer Science and Engineering

    Research on large-scale systems is challenging because deploying a large system needs a great amount of resources. My approach to this problem is based on the observation that most large-scale systems follow a "centralized metadata and sharded data" design, which provides opportunities to extend work at small scales to large scales: for data processing, an optimization in one shard can automatically be extended to all shards because they all execute the same protocol; for metadata processing, the repetition of behaviors from different data shards allows us to extrapolate their requests to the centralized metadata server, making it possible to stress test a metadata server with a limited number of machines. First, to optimize data processing, we focus on replication protocols, because they are widely used in today's distributed systems to protect data against failures, and replicating data brings a significant cost. Typically, stronger replication protocols that can tolerate more kinds of errors require more replicas. In our work, we propose a general approach to reduce the replication cost of asynchronous state machine replication protocols while maintaining their availability properties. For example, our approach can reduce the number of replicas of Paxos from 2f+1 to f+1. Second, to optimize metadata processing, we find the key is to be able to evaluate metadata servers: to reduce overhead on centralized metadata servers, existing systems try to minimize traffic to the metadata servers, but this brings a challenge in that their problems are hard to observe at small scales. In our work, we propose PatternMiner, a tool that extrapolates a system's messages to the metadata service at large scale based on messages logged at small scales. We can then replay the generated workload on the targeted metadata service to measure its throughput and analyze its behavior at large scale.
Our evaluation on two types of metadata services of YARN framework, HDFS NameNode and (open full item for complete abstract)

    Committee: Yang Wang (Advisor); Xiaodong Zhang (Committee Member); Feng Qin (Committee Member); Spyros Blanas (Committee Member) Subjects: Computer Engineering; Computer Science
  • 12. Gao, Yang An Exploratory Sequential Study of Chinese EFL Teachers' Beliefs and Practices in Reading and Teaching Reading

    PHD, Kent State University, 2018, College of Education, Health and Human Services / School of Teaching, Learning and Curriculum Studies

    This mixed-methods study explored characteristics of Chinese EFL teachers' beliefs of reading and teaching reading. In addition, it investigated the relationship between English as a foreign language (EFL) teachers' stated beliefs and their actual practices. Specifically, two relationships were explored in this study. The first one was whether EFL teachers' stated beliefs about reading are in/consistently indicated in their stated beliefs about teaching reading. Second, the study also aimed to understand whether EFL teachers' stated beliefs about how they teach English reading are consistent with their actual practices in classrooms. Participants in the study included 96 university EFL teachers who were faculty members from three different universities in a city in Northeast China. Within an exploratory sequential mixed-methods design, data collection and analysis occurred in two phases. The first part was a quantitative survey of 10 open-ended questions modified according to Burke Reading Interview (BRI). It solicited the participants' beliefs about reading and teaching reading. Statistical analysis was then conducted to describe the data collected in this quantitative part. For the second, qualitative part, classroom observations were used to collect data on teachers' actual practices. The findings of the study showed that three theoretical orientations about reading (behaviorism, cognitivism, and constructivism) were matrixed with three different belief systems (dominant, dual, and multiple belief systems). The matrix indicated a complex belief system about reading and teaching reading among these EFL teachers. Within the matrix, relationships among different beliefs were non-linear and unpredictable. In terms of the constructivist theoretical orientation, the findings indicated a statistically significant but weak association between stated beliefs about reading and stated beliefs about teaching reading. 
The findings also indicated both consistencies (open full item for complete abstract)

    Committee: William Bintz (Committee Chair); Denise Morgan (Committee Member); Sarah Rilling (Committee Member) Subjects: Education; English As A Second Language; Language; Linguistics
  • 13. BHUJEL, MAN Performance Enhancement of Data Retrieval from Episodic Memory in Soar Architecture

    Master of Science, University of Toledo, 2018, Electrical Engineering

    Episodic memory has been a key component of various intelligent and cognitive architectures; it stores the autobiographical events of past experiences. The implementation of episodic memory enhances the performance of cognitive agents by utilizing past history for decision making. During episodic retrieval in the Soar architecture, the cue matching step involves a two-stage process to improve the performance of the architecture. Soar implements cue matching as a surface cue analysis, which finds candidate episodes based on the matched leaf nodes. It then performs a structural match on candidate episodes with a full surface match. However, continuous design research is still needed to minimize the operational time of episodic processes and to keep cue matching timely as episodic memory grows. This thesis provides insight into improving both stages of cue matching, which ultimately leads to quicker retrieval of episodes. First, approximations of the original base-level activation (BLA) implementation are used to determine the feature weight for surface cue analysis. These methods are computationally efficient. Second, a new approach to solving the constraint satisfaction problem (CSP), the arc-consistency algorithm, is implemented for the structural cue analysis. For the experiments, two of the most frequently used testing environments, Eaters and TankSoar, are chosen. The first experiment finds that the Eaters agent provides comparable performance with approximations of BLA and demonstrates their applications. The approximation of BLA has high computational efficiency. Determining the activation value of working memory elements (WMEs) is a part of cue matching; hence, incorporating the approximation of BLA leads to faster retrieval from episodic memory. Also, using a new technique for structural graph matching, this thesis obtains lower query times compared to the original implementation, which further decreases retrieval time.
Moreover, the results sh (open full item for complete abstract)

    Committee: Vijay Devabhaktuni (Committee Chair); Ahmad Javaid (Committee Co-Chair); Devinder Kaur (Committee Member) Subjects: Electrical Engineering
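The arc-consistency idea the abstract above applies to structural cue matching is the classic AC-3 algorithm: repeatedly prune variable values that have no supporting value under some binary constraint, until a fixed point (or an empty domain, proving inconsistency). A generic sketch on an abstract CSP, not on Soar episodes or WMEs (the data structures here are my own illustration):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 arc-consistency filtering.
    `domains`: dict mapping variable -> set of candidate values.
    `constraints`: dict mapping an ordered pair (x, y) -> a predicate
    allowed(vx, vy); include both directions of each constraint.
    Prunes `domains` in place; returns False if some domain empties."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # values of x lacking any supporting value of y must be removed
        revised = {vx for vx in domains[x]
                   if not any(allowed(vx, vy) for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return False              # inconsistent CSP
            # shrinking x may invalidate supports, so recheck arcs into x
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True
```

In the structural-match setting, the variables would be cue graph nodes, the domains candidate episode nodes, and the constraints the cue's edge structure; pruning before the full match is what reduces query time.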
  • 14. Liu, Tuo Model Selection and Adaptive Lasso Estimation of Spatial Models

    Doctor of Philosophy, The Ohio State University, 2017, Economics

    Various spatial econometrics models have been proposed to characterize spatially correlated data. As economic theories provide little guidance on constructing a true model, we are often faced with the problem of choosing among spatial econometrics models. My dissertation develops a Vuong-type test and an adaptive Lasso procedure that complement existing spatial model selection methods in several aspects. Chapter 1 develops a likelihood-ratio test for model selection between two spatial econometrics models. It generalizes Vuong (1989) to models with spatial near-epoch dependent (NED) data. We measure the distance from a model to a data generating process by the Kullback-Leibler Information Criterion and test the null hypothesis that two models are equally close to the data generating process. We make no assumption on the model specification of the truth and allow for the cases where both, either, or neither of the two competing models is mis-specified. As a prerequisite of the test, we first show that the quasi-maximum likelihood estimators (QMLE) of spatial econometrics models are consistent estimators of their pseudo-true values and are asymptotically normal under regularity conditions. In particular, we study spatial autoregressive models with spatial autoregressive errors (SARAR) and matrix exponential spatial specification (MESS) models. With asymptotic properties of QMLEs and limit theorems for NED random fields, we then derive the limiting null distribution of the test statistic. A spatial heteroskedastic and autoregressive consistent estimator of the asymptotic variance of the test statistic under the null, which is necessary to implement the test, is constructed. Monte Carlo experiments are designed to investigate the finite sample performance of QMLEs for SARAR and MESS models, as well as the size and power of the proposed test. Chapter 2 proposes a penalized maximum likelihood approach with an adaptive Lasso penalty to estimate SARAR models.
It allows for simultaneou (open full item for complete abstract)

    Committee: Lung-fei Lee (Advisor); Jason Blevins (Committee Member); Mehmet Caner (Committee Member) Subjects: Economics
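The Vuong-type test generalized in the abstract above reduces, in the classical i.i.d. case, to a very simple statistic: with m_i the per-observation log-likelihood ratio between the two competing models (each evaluated at its own quasi-MLE), T = sqrt(n) * mean(m) / sd(m) is asymptotically N(0, 1) under the null that both models are equally close to the truth in Kullback-Leibler divergence. A sketch of that i.i.d. baseline only; the thesis's actual contribution, the NED-robust variance estimator for spatial data, is not attempted here:

```python
import math
import numpy as np

def vuong_statistic(loglik1, loglik2):
    """Vuong (1989) non-nested model-selection statistic from pointwise
    log-likelihoods (i.i.d. case). Positive T favors model 1; under the
    null of KL-equivalence T is asymptotically standard normal."""
    m = np.asarray(loglik1) - np.asarray(loglik2)  # per-obs LR terms
    n = m.size
    t = math.sqrt(n) * m.mean() / m.std(ddof=1)
    p = math.erfc(abs(t) / math.sqrt(2.0))         # two-sided N(0,1) p-value
    return t, p
```

For example, feeding it the pointwise log-densities of a correctly specified normal model versus a mis-specified Laplace model on normal data yields a large positive T, correctly favoring the normal specification.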
  • 15. BABI, MAMDOUH Byzantine Fault Tolerant Collaborative Editing

    Doctor of Engineering, Cleveland State University, Washkewicz College of Engineering

    Collaborative work applications involve shared views by multiple users. In a collaborative editing system, multiple users can view, edit, and save the same document simultaneously. Therefore, any infrastructure for collaborative work must support consistency and some type of concurrency control. Some systems support strict consistency. Driven by the need for highly reliable real-time collaborative editing systems, I am introducing a lightweight solution for protecting real-time collaborative editing systems against Byzantine faults. Byzantine Fault Tolerance (BFT) mechanisms are used to protect such systems from malicious faults. I observe that a centralized coordination algorithm not only reduces the complexity of the editing system, but also makes it easier to harden the system with Byzantine fault tolerance. In this dissertation, a comprehensive analysis of the potential threats towards collaborative editing systems will be described and a set of Byzantine fault tolerance mechanisms that do not require any additional redundant resources will be introduced. If the system has sufficient redundancy, such mechanisms can be used to ensure strong protection against various malicious faults. Even without sufficient redundancy in the system, the mechanisms outlined in this dissertation would still help limit the damage caused by a faulty participant. My contributions are outlined as follows: (1) A case will be made to favor the use of centralized coordination algorithms for real-time collaborative editing systems. (2) A comprehensive threat analysis of collaborative editing systems will be performed. (3) A set of lightweight BFT mechanisms that can protect such editing systems from malicious faults without resorting to additional redundant resources will be presented. The threat analysis has shown that threats from a faulty participant and/or from the publisher can cause serious damage to the system.
It has also been shown that (open full item for complete abstract)
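The centralized-coordination argument in this abstract lends itself to a small sketch. The code below is an illustrative toy, not the dissertation's protocol: a single coordinator assigns a total order to edits and publishes a hash chain over the log, and every client recomputes the chain, so an equivocating (Byzantine) coordinator is detectable without extra replicas. All class and field names here are assumptions introduced for illustration.

```python
import hashlib
import json

class Coordinator:
    """Central coordinator: serializes edits into a total order and
    publishes a digest chain so clients can audit its behavior."""

    def __init__(self):
        self.log = []       # totally ordered operations
        self.digests = []   # hash chain over the log

    def submit(self, op):
        """Assign the next sequence number and extend the digest chain."""
        seq = len(self.log)
        prev = self.digests[-1] if self.digests else "genesis"
        digest = hashlib.sha256(
            (prev + json.dumps(op, sort_keys=True)).encode()
        ).hexdigest()
        self.log.append(op)
        self.digests.append(digest)
        return seq, digest

class Client:
    """Replays the ordered log and recomputes the digest chain locally;
    a mismatch exposes a coordinator that sent different orders to
    different clients."""

    def __init__(self):
        self.doc = ""
        self.prev = "genesis"

    def apply(self, op, digest):
        expected = hashlib.sha256(
            (self.prev + json.dumps(op, sort_keys=True)).encode()
        ).hexdigest()
        if expected != digest:
            raise ValueError("digest mismatch: coordinator equivocated")
        self.doc = self.doc[:op["pos"]] + op["text"] + self.doc[op["pos"]:]
        self.prev = digest

# Two clients apply the same totally ordered edits and converge.
coord = Coordinator()
alice, bob = Client(), Client()
for op in ({"pos": 0, "text": "hello"}, {"pos": 5, "text": " world"}):
    _, d = coord.submit(op)
    alice.apply(op, d)
    bob.apply(op, d)
```

Because ordering is centralized, clients need only verify the coordinator's digest chain rather than run a full multi-replica agreement protocol, which is the "lightweight" aspect the abstract emphasizes.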

    Committee: Wenbing Zhao Ph.D. (Advisor); Lili Dong Ph.D. (Committee Member); Timothy Arndt Ph.D. (Committee Member); Janche Sang Ph.D. (Committee Member); Sanchita Mal-Sarkar Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science; Engineering
  • 16. Hosseinyalamdary, Saivash Traffic Scene Perception using Multiple Sensors for Vehicular Safety Purposes

    Doctor of Philosophy, The Ohio State University, 2016, Civil Engineering

    Autonomous driving is an emerging technology that can prevent accidents on the road in the future. It faces many challenges, however, because of varying environmental conditions and the limitations of individual sensors. In this dissertation, we study the integration of multiple sensors to overcome these limitations and reliably perform the missions that enable autonomous driving. The laser scanner point cloud is a rich source of information, but it suffers from low resolution, especially for distant objects. We generalize 2D super-resolution approaches from image processing to 3D point clouds. Two variants of 3D super-resolution are developed: in the first, the dense point cloud is generated so that it follows the geometry of the original point cloud; in the second, the brightness of the images is utilized to generate the dense point cloud. The results show that our proposed approach successfully improves the density of the point cloud, preserves the edges and corners of objects, and provides a more realistic dense point cloud of objects than existing surface reconstruction approaches. Static and moving objects must be detected on the road, the moving objects must be tracked, and the trajectory of the platform must be designed to avoid accidents. The densified point clouds are integrated with other sources of information, including the GPS/IMU navigation solution and GIS maps, to detect objects on the road and track the moving ones. The results show that static and moving objects are detected, the moving objects are accurately tracked, and their pose is estimated. In addition to obstacle avoidance, autonomous vehicles must detect and obey the traffic lights and signs on the road. Because of the variations in traffic lights, we propose a Bayesian statistical approach to detect them. A spatio-temporal consistency constraint is applied to provide coherent traffic light detection in space and time. 
In addition, conic section geometry is utilized to estimate the position of the traffic lights with (open full item for complete abstract)
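The geometry-following densification variant can be illustrated with a toy step. This is not the dissertation's algorithm — the `densify` function and its nearest-neighbor midpoint heuristic are assumptions standing in for the actual 3D super-resolution method — but it shows the basic idea of inserting new points that stay on the surface implied by the original cloud.

```python
import numpy as np

def densify(points):
    """Naive geometry-following densification: for each point, insert
    the midpoint to its nearest neighbor, increasing point density
    while staying near the underlying surface.

    Uses a brute-force pairwise distance matrix, so it is suitable
    only for small clouds."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # ignore self-distances
    nn = d2.argmin(axis=1)                # index of each nearest neighbor
    midpoints = (points + points[nn]) / 2.0
    return np.vstack([points, midpoints])

# A sparse scan line along the x-axis; densification fills the gaps.
sparse = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
dense = densify(sparse)
```

A real laser-scanner pipeline would use a spatial index (k-d tree) and, as the abstract notes, could instead exploit co-registered image brightness to place the new points.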

    Committee: Alper Yilmaz (Advisor); Charles Toth (Committee Member); Ralph von Frese (Committee Member) Subjects: Civil Engineering
  • 17. Guzman, Gregory Intertemporal Choice and Enrollment: Exploring the Influence of Latency on Enrollment Yield within the Recruitment Funnel

    Doctor of Philosophy, University of Toledo, 2014, Higher Education

    The higher education marketplace in the United States has changed. Competition has increased, and modes of instructional delivery have evolved to meet demand, yet enrollment at post-secondary institutions in the United States has been declining. Students have not persisted through the pre-matriculation funneling stages of the enrollment process with the same consistency as in the past. The purposes of this dissertation were (a) to assess the period of latency between application and enrollment and (b) to determine whether students would be more likely to persist through the recruitment funnel if institutions altered their enrollment calendars. The researcher reviewed data from a single proprietary institution comprising multiple campuses located throughout the eastern and southern portions of the United States to determine the influence of latency, within the recruitment funnel, upon yield. Exploring nearly four years' worth of data and more than 32,000 student files, the researcher determined that increasing the number of start dates did not practically influence students'/consumers' purchasing behavior at Career College. Furthermore, shortening the latency period did little to nothing to change the percentage of students persisting through the recruitment funnel. However, the findings did reveal significant behavioral differences between traditional and non-traditional students. In sum, the findings revealed that students are essentially consumers who will act upon their desire to purchase products (e.g., a college degree) in a time frame consistent with their own immediate needs and opportunity costs, regardless of institutional efforts to influence them otherwise. In other words, latency is an institutionally controllable factor that does not appear to alter the course of enrollment yield among traditional students.

    Committee: Penny Poplin Gosetti Ph.D. (Committee Chair); Ronald Opp Ph.D. (Committee Member); William Getter D.P.A. (Committee Member); David Black Ph.D. (Committee Member) Subjects: Economics; Higher Education; Higher Education Administration
  • 18. Som, Agniva Paradoxes and Priors in Bayesian Regression

    Doctor of Philosophy, The Ohio State University, 2014, Statistics

    The linear model has been by far the most popular and most attractive choice of a statistical model over the past century, ubiquitous in both frequentist and Bayesian literature. This dissertation studies the modeling implications of many common prior distributions in linear regression, including the popular g prior and its recent ameliorations. Formalization of desirable characteristics for model comparison and parameter estimation has led to the growth of appropriate mixtures of g priors that conform to the seven standard model selection criteria laid out by Bayarri et al. (2012). The existence of some of these properties (or lack thereof) is demonstrated by examining the behavior of the prior under suitable limits on the likelihood or on the prior itself. The first part of the dissertation introduces a new form of an asymptotic limit, the conditional information asymptotic, driven by a situation arising in many practical problems when one or more groups of regression coefficients are much larger than the rest. Under this asymptotic, many prominent “g-type” priors are shown to suffer from two new unsatisfactory behaviors, the Conditional Lindley's Paradox and Essentially Least Squares estimation. The cause behind these unwanted behaviors is the existence of a single, common mixing parameter in these priors that induces mono-shrinkage. The novel block g priors are proposed as a collection of independent g priors on distinct groups of predictor variables and improved further through mixing distributions on the multiple scale parameters. The block hyper-g and block hyper-g/n priors are shown to overcome the deficiencies of mono-shrinkage, and simultaneously display promising performance on other important prior selection criteria. 
The second part of the dissertation proposes a variation of the basic block g prior, defined through a reparameterized design, which has added computational benefits and also preserves the desirable properties of the original formulat (open full item for complete abstract)
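The block g prior named in this abstract can be written out explicitly. The following is a sketch of the construction as the abstract describes it (independent g priors on distinct groups of predictors); the partition notation $X_k$, $g_k$ is introduced here for illustration.

```latex
% Zellner's g prior places one common scale g on all coefficients:
%   \beta \mid g, \sigma^2 \sim \mathrm{N}\!\bigl(0,\; g\,\sigma^2 (X^\top X)^{-1}\bigr)
% A block g prior instead partitions the design as X = [X_1, \dots, X_K]
% and gives each block of coefficients its own scale:
\beta_k \mid g_k, \sigma^2 \;\sim\; \mathrm{N}\!\bigl(0,\; g_k\,\sigma^2 (X_k^\top X_k)^{-1}\bigr),
\qquad k = 1, \dots, K, \quad g_1, \dots, g_K \ \text{independent}.
% Mixing over each g_k separately (hyper-g or hyper-g/n style) breaks the
% mono-shrinkage induced by a single common g.
```

With a single common $g$, all coefficients shrink together, which is what drives the Conditional Lindley's Paradox and Essentially Least Squares behaviors the abstract identifies; per-block scales let large and small coefficient groups shrink at different rates.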

    Committee: Christopher Hans (Advisor); Steven MacEachern (Advisor); Mario Peruggia (Committee Member) Subjects: Statistics
  • 19. Sierra Cadavid, Andrea Multicomponent Quality Control Analysis for the Tomato Industry Using Portable Mid-Infrared (MIR) Spectroscopy

    Master of Science, The Ohio State University, 2014, Food Science and Technology

    Tomatoes are the second most important crop in the world after potatoes, and most are consumed as processed products such as tomato paste, tomato sauce, and ketchup. Several quality parameters need to be controlled during processing to ensure the desired quality of the final product. Some of the most commonly used quality traits in the tomato industry are pH, °Brix, Bostwick consistency, predicted paste Bostwick, Ostwald value, serum viscosity, color, reducing sugar content, and organic acid content. One alternative that has been studied to provide rapid and reliable analysis of quality traits in tomato products is Fourier transform infrared (FT-IR) spectroscopy, which has shown good prediction of soluble solids, titratable acidity, pH, reducing sugars, organic acids, and carotenoid content in different tomato products with little sample preparation; however, rheological properties had not previously been predicted with this technology. Previous studies on the determination of quality traits in tomato products using FT-IR spectroscopy have required some sample preparation, such as filtration and centrifugation, which slows the methodology and increases the cost of analysis since filters and syringes are required. Therefore, our objective was to develop a rapid and reliable method using a portable FT-IR spectrometer for the determination of important quality traits in tomato juice, including rheological characteristics, using direct measurements on the juice. Tomato juice samples (n=180) from varieties and breeding lines encompassing a wide range of quality characteristics were obtained from California. At the time the juice was produced, serum viscosity (Ostwald), consistency (Bostwick), pH, and °Brix (refractive index) were measured. 
Juice samples were shipped to The Ohio State University where spectra were collected in duplicate using a portable FT-IR in the transmittance mode (50µm) and reducing sugars (glucose, fructose) concentrations were dete (open full item for complete abstract)

    Committee: Luis Rodriguez-Saona Dr. (Advisor); Lynn Knipe Dr. (Committee Member); Monica Giusti Dr. (Committee Member) Subjects: Food Science
  • 20. Gretton, Jeremy Examining Inference Processes Underlying Knowledge Complexity Effects on Attitude-Behavior Consistency

    Master of Arts, The Ohio State University, 2013, Psychology

    The present research had two main goals. First, it examined whether deliberation at the time of behavior changes the effect of knowledge complexity on attitude-behavior consistency. Second, it examined the hypothesized inference processes underlying knowledge complexity effects on attitude-intention consistency for higher-deliberation behavior (Fabrigar, Petty, Smith, & Crites, 2006). These studies used a department store paradigm, in which participants formed attitudes toward two fictional department stores, varying in favorability toward each store (within-subjects) and in the complexity of the knowledge presented to create the attitudes (between-subjects). Participants indicated which store they would choose when making a purchase involving a product for which they had not received information in the store descriptions. All of the experiments also manipulated (between-subjects) participants' ability to deliberate about their report of their behavioral intentions. A pilot study, as well as a series of four other studies, failed to find an effect of deliberation at the time of intention on the (enhancing) effect of knowledge complexity on attitude-intention consistency. These results suggested that knowledge complexity could have effects in both relatively low- and high-deliberation settings. Previous researchers had hypothesized that knowledge complexity would be associated with inferences that the attitude was more useful or applicable to the purchase decision, and that these inferences would mediate complexity effects under higher deliberation (Fabrigar et al., 2006). I suspected that such inferences would be responsible for knowledge complexity effects in high-deliberation settings but not in lower-deliberation settings. Although Experiment 1A provided evidence consistent with these effects, further analyses were inconsistent across three follow-up studies. 
Knowledge complexity was associated with higher rated inference of applicab (open full item for complete abstract)

    Committee: Duane Wegener Dr. (Advisor); Russell Fazio Dr. (Committee Member); Richard Petty Dr. (Committee Member) Subjects: Behavioral Sciences; Psychology; Social Psychology