Search Results (1 - 25 of 151 Results)

Imbulgoda Liyangahawatte, Gihan Janith Mendis
Hardware Implementation and Applications of Deep Belief Networks
Master of Science in Engineering, University of Akron, 2016, Electrical Engineering
Deep learning is a subset of machine learning that contributes widely to the contemporary success of artificial intelligence. The essential idea of deep learning is to process complex data by abstracting hierarchical features via deep neural network structure. As one type of deep learning technique, deep belief network (DBN) has been widely used in various application fields. This thesis proposes an approximation based hardware realization of DBNs that requires low hardware complexity. This thesis also explores a set of novel applications of the DBN-based classifier that will benefit from a fast implementation of DBN. In my work, I have explored the application of DBN in the fields of automatic modulation classification method for cognitive radio, Doppler radar sensor for detection and classification of micro unmanned aerial systems, cyber security applications to detect false data injection (FDI) attacks and localize flooding attacks, and applications in social networking for prediction of link properties. The work in this thesis paves the way for further investigation and realization of deep learning techniques to address critical issues in various novel application fields.
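
The keywords for this entry point to a multiplierless, low-complexity FPGA architecture for DBNs. Below is a minimal Python sketch of one common route to such a design, quantizing weights to signed powers of two so that multiply-accumulate reduces to shift-and-add; this is a generic illustration under assumed details, not the thesis's actual architecture.

```python
import numpy as np

def quantize_pow2(w, eps=1e-12):
    """Round each weight to the nearest signed power of two (illustrative only).

    With power-of-two weights, multiply-accumulate reduces to shift-and-add,
    one common route to low-complexity hardware implementations."""
    sign = np.sign(w)
    exp = np.round(np.log2(np.abs(w) + eps))
    return sign * np.power(2.0, exp)

def rbm_hidden_probs(v, W, b):
    """Sigmoid activation of one RBM layer, the building block of a DBN."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b)))

# Toy comparison of exact vs. power-of-two-approximated activations.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(8, 4))
v = rng.integers(0, 2, size=(1, 8)).astype(float)
b = np.zeros(4)
print(rbm_hidden_probs(v, W, b))
print(rbm_hidden_probs(v, quantize_pow2(W), b))
```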

Committee:

Jin Wei (Advisor); Arjuna Madanayaka (Committee Co-Chair); Subramaniya Hariharan (Committee Member)

Subjects:

Artificial Intelligence; Computer Engineering; Electrical Engineering; Engineering; Experiments; Information Technology

Keywords:

deep belief networks; multiplierless digital architecture; Xilinx FPGA implementations; low-complexity; applications of deep belief networks; spectral correlation function; modulation classification; drone detection; doppler radar; cyber security

Moharreri, Kayhan
Augmenting Collective Expert Networks to Improve Service Level Compliance
Doctor of Philosophy, The Ohio State University, 2017, Computer Science and Engineering
This research introduces and develops the new subfield of large-scale collective expert networks (CEN) concerned with time-constrained triaging which has become critical to the delivery of increasingly complex enterprise services. The main research contribution augments existing human-intensive interactions in the CEN with models that use ticket content and transfer sequence histories to generate assistive recommendations. This is achieved with a recommendation framework that improves the performance of CEN by: (1) resolving incidents to meet customer time constraints and satisfaction, (2) conforming to previous transfer sequences that have already achieved their Service Levels; and additionally, (3) addressing trust to encourage adoption of recommendations. A novel basis of this research is the exploration and discovery of resolution process patterns, and leveraging them towards the construction of an assistive resolution recommendation framework. Additional interesting new discoveries regarding CENs include existence of resolution workflows and their frequent use to carry out service-level-effective resolution on regular content. In addition, the ticket-specific expertise of the problem solvers and their dynamic ticket load were found to be factors in the time taken to resolve an incoming ticket. Also, transfers were found to reflect the experts' local problem-solving intent with respect to the source and target nodes. The network performs well if certain transfer intents (such as resolution and collective) are exhibited more often than the others (such as mediation and exploratory). The assistive resolution recommendation framework incorporates appropriate strategies for addressing the entire spectrum of incidents. This framework consists of a two-level classifier with the following parts: (1) content tagger for routine/non-routine classification, (2) A sequence classifier for resolution workflow recommendation, (3) Response time estimation based on learned dynamics of the CEN (i.e. Expertise, and ticket load), and (4) transfer intent identification. Our solution makes reliable proactive recommendations only in the case of adequate historical evidence thus helping to maintain a high level of trust with the interacting users in the CEN. By separating well-established resolution workflows from incidents that depend on experts’ experiential and `tribal' knowledge for the resolution, this research shows a 34% performance improvement over existing content-aware greedy transfer model; it is also estimated that there will be a 10% reduction in the volume of service-level breached tickets. The contributions are shown to benefit the enterprise support and delivery services by providing (1) lower decision and resolution latency, (2) lower likelihood of service-level violations, and (3) higher workforce availability and effectiveness. More generally, the contributions of this research are applicable to a broad class of problems where time-constrained content-driven problem-solving by human experts is a necessity.
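
A minimal sketch of how the first stage of such a two-level design (routine vs. non-routine content tagging) might look, using TF-IDF features and logistic regression in Python; the example tickets, labels, and model choice are assumptions for illustration, not the dissertation's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1 of a two-level design: tag ticket text as routine (1) or non-routine (0).
# Training examples here are invented placeholders.
tickets = [
    "password reset request for user account",
    "VPN client fails to connect after update",
    "intermittent database deadlock under peak load",
    "unlock locked account",
]
labels = [1, 1, 0, 1]

tagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
tagger.fit(tickets, labels)

# Routine tickets would be passed to a stage-2 workflow recommender;
# non-routine ones would fall back to expert routing.
print(tagger.predict(["reset expired password"]))
```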

Committee:

Jayashree Ramanathan (Advisor); Rajiv Ramnath (Committee Member); Srinivasan Parthasarathy (Committee Member); Gagan Agrawal (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Information Science; Information Technology

Keywords:

IT Service Management, Collective Expert Networks, Process Discovery, Ticket Routing Recommendations, Resolution Time Estimation, Event Mining, IT Service Support, Service Level Compliance, Human-in-the-loop, Learning from Enterprise Event Data

Shoop, Jessica A.
SENIOR INFORMATION TECHNOLOGY (IT) LEADER CREDIBILITY: KNOWLEDGE SCALE, MEDIATING KNOWLEDGE MECHANISMS, AND EFFECTIVENESS
Doctor of Philosophy, Case Western Reserve University, 2017, Management
This dissertation explains leader effectiveness in the context of the senior information technology (IT) leader, who plays a pivotal role in the execution and delivery of corporate IT services. Considered leaders of leaders, senior IT leaders typically report to the Chief Information Officer (CIO). Using a sequential three-phase mixed-methods study, the thesis makes four contributions: (1) through qualitative inquiry, it shows that effective senior IT leaders maintain a balance of domain knowledge and emotional and social aptitudes; (2) it develops and validates a four-dimensional scale to measure the level of IT leader domain knowledge; (3) it demonstrates the nomological and predictive validity of the scale and evaluates the impact of IT leader domain knowledge on solving managerial problems and brokering knowledge to others; and (4) the studies combine to build a cohesive argument that leader credibility, of which technical domain knowledge forms a core component, is a critical antecedent of leadership effectiveness. The validation is founded on a sample of 104 senior IT leaders and 490 IT leader subordinates within a global IT service firm. Overall, the findings suggest that the thus-far neglected effect of IT domain knowledge is not merely an important but a vital component influencing overall senior IT leader effectiveness. This has consequences for established theories of both leader credibility and leader effectiveness in highly specialized technical domains. Practically, the study underscores the importance of hiring and retaining senior IT leaders with strong technical credentials.

Committee:

Kalle Lyytinen, Ph.D. (Committee Chair); Jagip Singh, Ph.D. (Committee Member); Genevieve Bassellier, Ph.D. (Committee Member); John King, Ph.D. (Committee Member)

Subjects:

Business Administration; Information Systems; Information Technology; Management

Keywords:

Senior IT Leaders; Leadership Effectiveness; Credibility; Domain Knowledge; Leader Knowledge; Knowledge Mechanisms; Scale Development; Multi-dimensional Scale Validity; Mixed Methods

Roy, Enakshi
Social Media, Censorship and Securitization in the United States and India
Doctor of Philosophy (PhD), Ohio University, 2017, Journalism (Communication)
Using the theoretical perspectives of Spiral of Silence and Securitization, this dissertation examines (1) how censorship practices such as content removal were employed by the United States and Indian governments to securitize the internet and social media, and (2) whether such practices contribute to an online spiral of silence. To explore these aspects, this study used a mixed-method approach with in-depth interviews and surveys. Seven interviews with authors of Transparency Reports and legal experts provided information about the U.S. and Indian government-initiated content removal process from Google Web Search, Blogger, YouTube, Facebook and Twitter between 2010 and 2015. Surveys with 587 respondents from the United States and India explored self-censorship on Facebook and Twitter on issues related to national security and government criticism. The findings indicate that in the United States, “defamation” is a frequently cited yet often-misused reason for content removal, while in India “religious offense” and “defamation” are prominent reasons for content takedowns. On several occasions, protected speech was removed from the internet and social media in both countries. Such acts of state-level censorship, in turn, increase individuals' self-censorship on controversial issues on social media. The implication is that using the law to criminalize dissent increases self-censorship, which is counterproductive to democratic discourse.

Committee:

Yusuf Kalyango, Jr., Ph.D. (Committee Chair); Aimee Edmondson, Ph.D. (Committee Member); Eve Ng, Ph.D. (Committee Member); Nukhet Sandal, Ph.D. (Committee Member)

Subjects:

Communication; Information Technology; International Law; Journalism; Legal Studies; Mass Communications; Mass Media; Technology

Keywords:

Transparency Report; Internet censorship; Internet censorship USA, India; Internet Securitization; Spiral of Silence public opinion; public opinion social media; social media censorship; content removal; Google, Facebook, Twitter transparency reporting

Dhar, Samir
Addressing Challenges with Big Data for Maritime Navigation: AIS Data within the Great Lakes System
Doctor of Philosophy, University of Toledo, 2016, Spatially Integrated Social Science
The study presented here deals with commercial vessel tracking in the Great Lakes using the Automatic Identification System (AIS). Specific objectives within this study include development of methods for data acquisition, data reduction, storage and management, and reporting of vessel activity within the Great Lakes using AIS. These data show considerable promise in tracking commodity flows through the system as well as documenting traffic volumes at key locations requiring infrastructure investment (particularly dredging). Other applications include detecting vessel calls at specific terminals, locks and other navigation points of interest. This study documents the techniques developed to acquire, reduce, aggregate and store AIS data at The University of Toledo. Specific topics include: techniques for reducing data volumes, vessel path tracking, estimating speed on the waterway network, detecting vessel calls made at a dock, and data analysis and mining for errors within AIS data. The study also revealed the importance of AIS technology for maritime safety, but the data contain errors and inaccuracies. These errors within the AIS data will have to be addressed and rectified in the future to make the data accurate and useful. The data reduction algorithm achieves a 98% reduction in AIS data volume, making the data more manageable. In the future, similar data reduction techniques could be applied to traffic GPS data collected for highways and railways.
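
A minimal sketch of the kind of track-thinning that can yield large AIS volume reductions: dropping consecutive position reports whose speed over ground (sog) and course over ground (cog) barely change. The field names and thresholds are assumptions, not the study's algorithm.

```python
def thin_track(reports, speed_tol=0.5, heading_tol=5.0):
    """Keep an AIS position report only when speed or heading changes
    noticeably since the last retained report (illustrative thresholds)."""
    kept = []
    for r in reports:
        if not kept:
            kept.append(r)
            continue
        last = kept[-1]
        if (abs(r["sog"] - last["sog"]) >= speed_tol
                or abs(r["cog"] - last["cog"]) >= heading_tol):
            kept.append(r)
    return kept

track = [
    {"t": 0, "sog": 10.0, "cog": 90.0},
    {"t": 1, "sog": 10.1, "cog": 90.2},   # nearly identical -> dropped
    {"t": 2, "sog": 10.0, "cog": 91.0},   # nearly identical -> dropped
    {"t": 3, "sog": 6.0,  "cog": 120.0},  # maneuver -> kept
]
print(len(thin_track(track)), "of", len(track), "reports retained")
```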

Committee:

Peter Lindquist (Committee Chair); Kevin Czajkowski (Committee Member); Neil Reid (Committee Member); Mark Vonderembse (Committee Member); Richard Stewart (Committee Member)

Subjects:

Geographic Information Science; Geography; Information Technology; Remote Sensing; Social Research; Transportation

Keywords:

Automatic Identification System, AIS, Big Data, Data Reduction Technique, Vessel Path, Vessel Call, Great Lakes, Maritime, VTS

Yang, Liu
Effect of product review interactivity, social inequality, and culture on trust in online retailers: A comparison between China and the U.S.
Doctor of Philosophy (Ph.D.), Bowling Green State University, 2017, Media and Communication
This is the first study to compare the predictive strength of a micro factor (interactivity of product review use experiences) and macro factors (social inequality and culture) on consumers’ trust in online retailers. It examines the predictors of online trust through information asymmetry theory, reciprocity, in-group favoritism and out-group derogation, and social presence. Consumers of the two largest e-commerce sites in the United States and China, Amazon and Tmall, are compared. The results show that the interactivity of product review use experience is the strongest predictor of consumers’ trust in online retailers, compared with social inequality and culture. Interactivity is positively related to consumers’ trust in famous brands, third-party retailers, and fulfilled third-party retailers on both Amazon and Tmall. In contrast, social inequality is negatively related to consumers’ trust in famous brands, third-party retailers, and fulfilled third-party retailers on both Amazon and Tmall. Individualism is positively related to trust in third-party retailers, while collectivism is positively related to trust in third-party retailers fulfilled by Amazon or Tmall. Power distance exerts a positive impact on trust in famous brands only. Collectivism plays a more critical role in predicting trust in fulfilled online retailers in the Chinese sample than in the U.S. sample. The relationship between trust in online retailers and consumers’ actual online purchases differs across countries. Trust in online retailers is an important direct predictor of online purchase diversity and an indirect predictor of the amount of money spent online in both the U.S. and China. It is a direct predictor of online purchase frequency in the U.S., but an indirect predictor of purchase frequency in China. Trust in online retailers is positively related to the amount of money spent on Amazon/Tmall indirectly, through its effect on shopping frequency on Amazon/Tmall.

Committee:

Louisa Ha, Professor (Advisor); Gi Woong Yun, Associate Professor (Committee Member); Lisa Hanasono, Associate Professor (Committee Member); Philip Titus, Associate Professor (Committee Member)

Subjects:

Comparative; Information Technology; Marketing; Mass Communications; Mass Media; Social Research

Keywords:

interactivity; online trust; product reviews; e-commerce; social inequality; culture; comparative study; China; US

Ramanayaka Mudiyanselage, Asanga
Analyzing vertical crustal deformation induced by hydrological loadings in the US using integrated Hadoop/GIS framework
Master of Science in Applied Geospatial Science, Bowling Green State University, 2018, Geology
Vertical crustal deformation for the contiguous US was assessed using continuous GPS stations over a total of 54 months. The study analyzed the correlation between vertical crustal deformation and hydrological loading. Precipitation data were used as a measure of surface hydrological loading. The relationship between GPS and precipitation data was studied by deriving Pearson correlation coefficients (r) for four different levels of watersheds (HUCs). To process the data for the temporal analysis, this study presents a prototype Hadoop/GIS framework that supports integrating distinct types of data. GPS and precipitation data were analyzed with Hadoop and Hive, which run on a configured multi-node cluster. The spatial analysis used GIS tools to produce correlation maps. GRACE data, which measure terrestrial water storage, were used to validate the results. The generated correlation coefficients suggest that in the Northwestern US, GPS deformation is negatively correlated with precipitation. For instance, many watersheds in Washington and Oregon produced high negative correlations (r between -0.55 and -0.75), which indicates that the driving factor for vertical crustal deformation in the Northwestern US is hydrological loading, which may have resulted from elastic loading processes. At the same time, GPS-GRACE correlation coefficients show reasonable agreement with GPS-precipitation correlations for the Northwestern US (r = -0.67, r = -0.69). However, the observed correlation coefficients for some watersheds in the Central Valley of California and in Pennsylvania and Maryland had moderate positive values (r = 0.52, r = 0.42), which may have resulted from other factors such as climatic conditions and geological and geophysical effects.
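
A minimal sketch of the core watershed-level computation, a Pearson correlation between monthly precipitation and GPS vertical displacement, using scipy; the series here are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder monthly series for one watershed (the study spans 54 months).
rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=30.0, size=54)          # mm per month
vertical = -0.02 * precip + rng.normal(scale=0.5, size=54)  # mm; loading pushes the crust down

r, p = pearsonr(precip, vertical)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
# A strongly negative r, as reported for Pacific Northwest watersheds,
# is consistent with elastic loading: more water, more subsidence.
```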

Committee:

Peter Gorsevski, Ph.D. (Advisor); Yuning Fu, Ph.D. (Committee Member); Jeffrey Snyder, Ph.D. (Committee Member)

Subjects:

Geographic Information Science; Geography; Geophysics; Information Science; Information Technology

Keywords:

Geoinformatics; Crustal Deformation; Hydrological Loadings; Geospatial Analysis; GPS; Precipitation; GRACE; GIS; Hadoop; Hive; Bigdata Analytics

Mohd Faseeh, Fnu
Probabilistic Smart Terrain Algorithm
Master of Computing and Information Systems, Youngstown State University, 2016, Department of Computer Science and Information Systems
Smart Terrain is an algorithm in which objects that can meet a need transmit signals to non-player characters, and those needs influence a character to move towards those objects. We describe how probabilistic reasoning can be added to it, so that each object may meet a need only with a given probability. An expected distance can then be computed from the probability and the distance to each object that might meet the need, allowing the non-player character to choose a route through the game. The algorithm can be used to manage a character’s needs and to direct it toward the objective with the highest priority or the greatest payoff. With a smart terrain, the algorithm defines how to find a goal in terms of probability and distance, and it lets the character make decisions in a human-like way. We implement the algorithm as a Unity 3D game using waypoints and a navigation mesh, where the objective is to find and collect valuable objects while staying away from the guards protecting them, all while navigating a maze-like game world. The algorithm finds a path based on adjacent routes in the game, making it difficult for the player to stay away from the guards. The player, on the other hand, is controlled by the user. The algorithm searches for possible paths and then makes a decision based on calculations over probabilities and distances, as discussed in detail in the thesis. Several features besides pathfinding, such as ray casting and the navigation mesh, are also implemented to make the game feel lifelike. The guards behave intelligently, and the algorithm updates the probability of the player being in a particular area of the game world over time. This makes the game even tougher to win.
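
A minimal sketch of the probability-and-distance scoring the abstract describes: the non-player character picks the object with the best distance discounted by the probability that it satisfies the need. Object positions and probabilities are invented for illustration.

```python
def best_goal(character_pos, objects):
    """Pick the object with the lowest expected cost, where cost is path
    distance discounted by the probability that the object actually
    satisfies the need (illustrative scoring rule)."""
    def expected_cost(obj):
        dx = obj["pos"][0] - character_pos[0]
        dy = obj["pos"][1] - character_pos[1]
        distance = (dx * dx + dy * dy) ** 0.5
        return distance / max(obj["p_satisfies"], 1e-6)
    return min(objects, key=expected_cost)

objects = [
    {"name": "chest_a", "pos": (10, 2), "p_satisfies": 0.9},
    {"name": "chest_b", "pos": (4, 3),  "p_satisfies": 0.3},
]
print(best_goal((0, 0), objects)["name"])  # nearer but uncertain chest_b loses
```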

Committee:

John Sullins, Ph.D. (Advisor); Alina Lazar, Ph.D. (Committee Member); Abdu Arslanyilmaz, Ph.D. (Committee Member)

Subjects:

Computer Science; Educational Technology; Information Technology; Web Studies

Keywords:

smart terrain, waypoints, navigation mesh, ray cast

Schafer, Sarah E.
Technology Systems and Practices in Transportation and Logistics: Exploring the Links Toward Competitive Advantage in Supply Chains
Doctor of Philosophy, University of Toledo, 2015, Manufacturing and Technology Management
Higher demands for a variety of products add not only to the complexity of coordinating a supply chain, but also to the number of freight movements to support those demands. The increased demand for moving materials and goods contributes to higher levels of congestion and pollution during a time when businesses, customers and governments are increasingly concerned with reducing carbon footprints. To this end, new technologies and data capabilities are emerging that can add integrated visibility (monitoring and tracing), efficiency and even sustainability within the supply chain in order to mitigate these issues and cultivate an ever desired competitive advantage. Companies continuously look for innovative ways to evolve and compete within their dynamic environments. One untapped area that can provide a significant source of competitive advantage is within the complex supplier network and distribution channels; specifically, within the logistics and transportation functions. In an era of increasingly complex supplier network relationships, there is a growing need to connect and automate the extended supply chain between organizations. Applications of information technologies (IT) are seen as key enablers to mitigate these issues, yet widespread use is not evident between trade partners and transportation providers. Applications of IT enabled systems (i.e. intelligent transportation systems for freight and transportation management systems) and practices (i.e. integrated information sharing and third party provided supply chain and logistics managers) can be used to improve efficiencies, reliability, and reduce carbon effects of freight movements. Benefits derived from the movement of freight can, in turn, benefit the wider supply chain through faster response times and lower holding costs realized from reduced inventories. Drawing on contingency theory and organizational information processing theory, this research conceptualizes a model to study the relationships between the major constructs (1) External Environmental Pressures, (2) Internal Organizational Environment, (3) IT Enabled Systems and Practices, (4) Transportation Outcomes, and (5) Competitive Advantage of the Supply Chain. Examining transportation as the link between enterprises in the supply chain is not well understood. This work is expected to open a new area for examining the interfaces between organizations in order to improve overall performance for supply and distribution networks. The development of a reliable instrument to test these relationships will contribute to research and practice. Hypothesized relationships were tested through a combined statistical analysis of primary data collected from 260 transportation providers. By providing researchers with a better understanding of contextual factors that drive organizational technology adoption, it will become easier to identify factors of success for future innovative technology initiatives, particularly pertaining to the transportation and logistics industry. Moreover, managers are expected to find results from evaluating specific types of IT enabled systems and practices particularly useful as they provide metrics for evaluating investments in those systems and practices based on performance measures for transportation outcomes in efficiency, reliability, responsiveness, quality, carbon emissions reduction, and equipment utilization. 
Results indicate that some IT enabled systems and practices, mainly intelligent transportation systems for freight and integrated information sharing, do positively impact transportation outcomes. Other IT enabled systems and practices were found to have weak impacts (i.e., using a transportation management system) or non-significant relationships (i.e., using a third-party provided supply chain and logistics manager). Implications of these findings are discussed. Finally, results indicate a strong relationship between positive transportation outcomes and the competitive advantage of the supply chain network, underscoring the importance of utilizing transportation providers to differentiate service offerings and build a competitive advantage for the supply chain. Contributions to research and implications of these results for practice are discussed.

Committee:

Mark Vonderembse, Ph.D. (Committee Chair); Peter Lindquist, Ph.D. (Committee Member); Thomas Sharkey, Ph.D. (Committee Member); P. Sundararaghavan, Ph.D. (Committee Member)

Subjects:

Business Administration; Information Technology; Transportation

Keywords:

Supply chain management, transportation, logistics, information technology applications

Sinha, Vinayak
Sentiment Analysis On Java Source Code In Large Software Repositories
Master of Computing and Information Systems, Youngstown State University, 2016, Department of Computer Science and Information Systems
While developers are writing code to accomplish the task assigned to them, their sentiments play a vital role and have a massive impact on quality and productivity. Sentiments can have either a positive or a negative impact on the tasks being performed by developers. This thesis presents an analysis of developer commit logs for GitHub projects. In particular, developer sentiment in commits is analyzed across 28,466 projects within a seven-year time frame. We use the Boa infrastructure’s online query system to generate commit logs as well as files that were changed during the commit. Two existing sentiment analysis frameworks (SentiStrength and NLTK) are used for sentiment extraction. We analyze the commits in three categories: large, medium, and small based on the number of commits using sentiment analysis tools. In addition, we also group the data based on the day of week the commit was made and map the sentiment to the file change history to determine if there was any correlation. Although a majority of the sentiment was neutral, the negative sentiment was about 10% more than the positive sentiment overall. Tuesdays seem to have the most negative sentiment overall. In addition, we do find a strong correlation between the number of files changed and the sentiment expressed by the commits the files were part of. It was also observed that SentiStrength and NLTK show consistent results and similar trends. Future work and implications of these results are discussed.
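
A minimal sketch of scoring commit messages with NLTK's VADER sentiment analyzer, one of the two frameworks named in the abstract (the SentiStrength side and the Boa queries are not shown); the commit messages are invented, and the polarity thresholds are conventional defaults rather than the thesis's settings.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon needed by VADER
sia = SentimentIntensityAnalyzer()

commits = [
    "Fix nasty race condition, finally works",
    "Ugh, revert broken build again",
    "Update README",
]

for msg in commits:
    compound = sia.polarity_scores(msg)["compound"]
    label = "positive" if compound > 0.05 else "negative" if compound < -0.05 else "neutral"
    print(f"{label:8s} {compound:+.2f}  {msg}")
```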

Committee:

Bonita Sharif, PhD (Advisor); Alina Lazar, PhD (Committee Member); John Sullins, PhD (Committee Member)

Subjects:

Computer Science; Information Technology; Organizational Behavior

Keywords:

Sentiment Analysis; Emotions; Commit logs; Java projects; Large Software Repositories

Albahli, Saleh Mohammad
Ontology-based approaches to improve RDF Triple Store
PHD, Kent State University, 2016, College of Arts and Sciences / Department of Computer Science
The World Wide Web enables easy, instant access to a huge quantity of information. Over the last few decades, a number of improvements have helped the web reach its current state. However, the current Internet links documents together without understanding them, and thus makes the content of the web only human-readable rather than machine-understandable. There is therefore a growing need for an efficient web that makes information machine-understandable, rather than only machine-processable, in order to reach the web of knowledge. To address this problem, the Semantic Web, or the “web of meaning,” tries to shift published data away from the form of web pages so that machines can understand the content. That is, computers become able to interoperate and reason on our behalf, opening up several different perspectives. However, with the increasing quantity of semantic data, there is a need for efficient and scalable performance from semantic repositories that store, and from which must be retrieved, large datasets containing Resource Description Framework (RDF) triples. This is a major obstacle to reaching the goal of the Semantic Web, and the problem is magnified by the unpredictable nature of data encoded in RDF. Additionally, current RDF stores generally scale poorly, which degrades performance when querying and retrieving RDF triples. As a consequence, we propose new semantic storage models for managing RDF data in relational databases, showing how a state-of-the-art scaling method can be improved with ontology-based techniques for speed and high scalability.
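
For context, a minimal sketch of the baseline "triple table" relational layout that ontology-based storage models aim to improve on, using sqlite; the data and the self-join query are illustrative and do not reflect the dissertation's proposed schemas.

```python
import sqlite3

# Baseline triple-table layout that ontology-aware schemes try to improve on.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
con.executemany(
    "INSERT INTO triples VALUES (?, ?, ?)",
    [
        ("ex:alice", "rdf:type", "ex:Person"),
        ("ex:alice", "ex:worksFor", "ex:acme"),
        ("ex:acme", "rdf:type", "ex:Company"),
    ],
)

# A basic SPARQL-style pattern ("who works for a company?") becomes a self-join,
# which is one reason naive triple stores scale poorly as queries grow.
rows = con.execute(
    """SELECT t1.subject FROM triples t1
       JOIN triples t2 ON t1.object = t2.subject
       WHERE t1.predicate = 'ex:worksFor' AND t2.predicate = 'rdf:type'
         AND t2.object = 'ex:Company'"""
).fetchall()
print(rows)
```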

Committee:

Austin Melton (Committee Chair); Angela Guercio (Committee Member); Ye Zhao (Committee Member); Alan Brandyberry (Committee Member); Mark Lewis (Committee Member)

Subjects:

Computer Science; Information Technology

Keywords:

Semantic Web, RDF data management, Triple Store, Ontology, FCA, Relational Database

Church, Donald Glen
Reducing Error Rates in Intelligence, Surveillance, and Reconnaissance (ISR) Anomaly Detection via Information Presentation Optimization
Master of Science in Industrial and Human Factors Engineering (MSIHE) , Wright State University, 2015, Industrial and Human Factors Engineering
In the ISR domain, time-critical decision-making and dealing with multiple information feeds place high demands on the human. When designing aids and tools, the decision maker must be taken into account. This research looks toward designing a decision aid based on the personality type of the operator. The BFI is used to determine the impact of personality and decision aid type (graphical vs. textual) on performance. Results show Openness and Agreeableness to be the strongest single factors in how a decision aid affects performance. A model was also developed to show how the human takes in information and relates it to a mental model when making an identification. This can assist the ISR community in developing an adaptive aiding system that reduces cycle time in the decision-making process and has the greatest impact on performance.

Committee:

Mary Fendley, Ph.D. (Advisor); Richard Warren, Ph.D. (Committee Member); Pratik Parikh, Ph.D. (Committee Member)

Subjects:

Engineering; Industrial Engineering; Information Technology; Personality Psychology

Keywords:

Personality; BFI; Big Five; ISR; Intelligence; Surveillance; Reconnaissance; SDT; Signal detection; visual aid; graphical aid; textual aid; interface design; perception; cognitive fit; perception model; information processing; human factors

AYDAR, MEHMET
Developing a Semantic Framework for Healthcare Information Interoperability
PHD, Kent State University, 2015, College of Arts and Sciences / Department of Computer Science
Interoperability in healthcare is stated as the ability of health information systems to work together within and across organizational boundaries in order to advance the effective delivery of healthcare for individuals and communities. The current healthcare information technology environment breeds incredibly complex data ecosystems. In many cases pertinent patient records are collected in multiple systems, often supplied by competing manufacturers with diverse data formats. This causes inefficiencies in data interoperability, as different formats of data create barriers in exchanging health information. This dissertation presents a semantic framework for healthcare information interoperability. We propose a system for translation of healthcare instance data, based on structured mapping definitions and using RDF as a common information representation to achieve semantic interoperability between different data models. Moreover, we introduce an entity similarity metric that utilizes the Jaccard index with the common relations of the data entities and common string literal words referenced by the data entities and augmented with data entity neighbors similarity. The precision of the similarity metric is enhanced by incorporating the auto-generated importance weights of the entity descriptors in the RDF representation of the dataset. Furthermore, we provide an automatic classification method, which we call summary graph generation, based on the pairwise entity similarities, and we propose that the summary graph can further be utilized for interoperability purposes. Finally, we present a suggestion based semi-automatic instance matching system and we test it on the RDF representation of a healthcare dataset. The system utilizes the entity similarity metric, and it presents similar node pairs to the user for possible instance matching. Based on the user feedback, it merges the matched nodes and suggests more matching pairs depending on the common relations and neighbors of the already matched nodes. We propose that the instance matching technique could be leveraged for mapping between separate data models.
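
A minimal sketch of a Jaccard-based entity similarity in the spirit of the metric described above, blending relation overlap with literal-word overlap; the fixed weights and example entities are assumptions, and the neighbor-similarity and auto-generated descriptor weights from the dissertation are omitted.

```python
def jaccard(a, b):
    """Plain Jaccard index of two sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def entity_similarity(e1, e2, w_rel=0.5, w_lit=0.5):
    """Blend relation overlap and literal-word overlap (weights are illustrative;
    the dissertation derives descriptor weights automatically and also folds in
    neighbor similarity)."""
    return (w_rel * jaccard(e1["relations"], e2["relations"])
            + w_lit * jaccard(e1["literal_words"], e2["literal_words"]))

patient_a = {"relations": {"hasDiagnosis", "hasProvider"},
             "literal_words": {"type", "2", "diabetes"}}
patient_b = {"relations": {"hasDiagnosis", "hasMedication"},
             "literal_words": {"diabetes", "mellitus"}}
print(round(entity_similarity(patient_a, patient_b), 3))
```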

Committee:

Austin Melton (Advisor); Angela Guercio (Committee Member); Ye Zhao (Committee Member); Alan Brandyberry (Committee Member); Helen Piontkivska (Committee Member); Javed I. Khan (Committee Chair); James L. Blank (Other)

Subjects:

Computer Science; Health Care; Health Sciences; Information Systems; Information Technology; Medicine

Keywords:

Healthcare Information Interoperability; Semantic Web; RDF; Translation of Instance Data; Summary Graph; RDF Instance Match; RDF Entity Similarity; Automatic Mapping; Information Translation

Lipkin, Ilya
Testing Software Development Project Productivity Model
Doctor of Philosophy in Manufacturing and Technology Management, University of Toledo, 2011, Manufacturing and Technology Management

Software development is an increasingly influential factor in today’s business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted.

There is no accurate model or measure available to guide an organization in estimating software development, with existing estimation models often underestimating software development effort by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated using local data with a small sample size, yet the resulting estimates do not offer improved cost analysis.

This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis based on Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD.

Practical implications of this study allow for practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers.

Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control, and Simulation. This research validates findings from previous work concerning software project productivity and leverages those results in this study. The hypothesized project productivity model provides statistical support and validation for the expert opinions used by practitioners in the field of software project estimation.

Committee:

Jeen Su Lim (Committee Chair); James Pope (Committee Member); Michael Mallin (Committee Member); Michael Jakobson (Committee Member); Wilson Rosa (Advisor)

Subjects:

Aerospace Engineering; Armed Forces; Artificial Intelligence; Business Administration; Business Costs; Computer Engineering; Computer Science; Economic Theory; Economics; Electrical Engineering; Engineering; Industrial Engineering; Information Science; Information Systems; Information Technology; Management; Marketing; Mathematics

Keywords:

"Software Estimation"; "Software Cost Model"; "Department of Defense Data"; COCOMO; "Software Project Productivity Model"

Chen, Wei
Developing a Framework for Geographic Question Answering Systems Using GIS, Natural Language Processing, Machine Learning, and Ontologies
Doctor of Philosophy, The Ohio State University, 2014, Geography
Geographic question answering (QA) systems can be used to help make geographic knowledge accessible by directly giving answers to natural language questions. In this dissertation, a geographic question answering (GeoQA) framework is proposed by incorporating techniques from natural language processing, machine learning, ontological reasoning and geographic information system (GIS). We demonstrate that GIS functions provide valuable rule-based knowledge, which may not be available elsewhere, for answering geographic questions. Ontologies of space are developed to interpret the meaning of linguistic spatial terms which are later mapped to components of a query in a GIS; these ontologies are shown to be indispensable during each step of question analysis. A customized classifier based on dynamic programming and a voting algorithm is also developed to classify questions into answerable categories. To prepare a set of geographic questions, we conducted a human survey and generalized four categories that have the most questions for experiments. These categories were later used to train a classifier to classify new questions. Classified natural language questions are converted to spatial SQLs to retrieve data from relational databases. Consequently, our demo system is able to give exact answers to four categories of geographic questions within an average time of two seconds. The system has been evaluated using classical machine learning-based measures and achieved an overall accuracy of 90% on test data. Results show that spatial ontologies and GIS are critical for extending the capabilities of a GeoQA system. Spatial reasoning of GIS makes it a powerful analytical engine to answer geographic questions through spatial data modeling and analysis.
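
A minimal sketch of turning one classified question category into a spatial SQL template, in the spirit of the framework described above; the table, column names, and the PostGIS-style ST_DWithin call are assumptions, not the demo system's actual queries.

```python
# Illustrative template for one question category ("what is within X of Y").
# Table/column names and the PostGIS functions used here are assumptions.
TEMPLATES = {
    "within_distance": (
        "SELECT p.name FROM places p, places q "
        "WHERE q.name = %(anchor)s "
        "AND ST_DWithin(p.geom::geography, q.geom::geography, %(meters)s) "
        "AND p.name <> q.name"
    ),
}

def to_spatial_sql(category, slots):
    """Map a classified question to its SQL template plus bound parameters."""
    return TEMPLATES[category], slots

sql, params = to_spatial_sql("within_distance",
                             {"anchor": "Columbus", "meters": 50_000})
print(sql)
print(params)
```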

Committee:

Eric Fosler-Lussier, Dr. (Committee Member); Rajiv Ramnath, Dr. (Committee Member); Daniel Sui, Dr. (Committee Member); Ningchuan Xiao, Dr. (Committee Chair)

Subjects:

Cognitive Psychology; Computer Science; Geographic Information Science; Geography; Information Science; Information Systems; Information Technology; Language

Keywords:

geographic information system; GeoQA; geographic question answering framework; geolinguistics; spatial ontologies;

MacRobbie, Danielle Elizabeth
An Investigation of Technological Impressions in Steve Reich and Beryl Korot's Three Tales
Master of Music (MM), Bowling Green State University, 2013, Music History
The impact of technology upon the twentieth century and the influence it continues to exert upon the present human community is self-evident. The allure and power of technology are broadcast via the grandest media and performance entertainment, while on the opposite spectrum, technology is being continually refined to render its electro-mechanical or bio-technical feats for humans. It is this theme of the increasing growth and import of technology upon every facet of human life that serves as the subject of Three Tales, a twenty-first century documentary digital video opera by composer Steve Reich and video artist Beryl Korot. In this work, Reich and Korot confront society's negligence of particular directions that technological development and application have undergone in the past century, and advise against taking the same paths in the coming era. Even as modern technology is critiqued in Three Tales, the work itself bends to accept the reality of technology's significance upon modern thought and life. In keeping with Reich and Korot's categorization of the work as a "documentary digital video opera," Three Tales is a performance work heavily dependent upon technology for its generation, presentation, and discussion of the interchange between technology and humankind. This thesis will investigate how technology has shaped the course of an artwork whose purpose is to expose and debate the handling of technology in current society. Technology in Three Tales is examined from various perspectives. Chapter one presents the foundational role of technology as "tool," "subject," and "theme." Chapter two considers how visual and audio technologies are used in Three Tales to suggest the effects technology may have upon perceptions of human connectedness and isolation. Chapter three investigates the inherent paradox in Three Tales that occurs from using technological devices for the work's production while its theme critiques modern, technological advances. The chapter also considers the influence technology has upon the formation of Three Tales's generic identification.

Committee:

Eftychia Papanikolaou (Advisor); Alexa Woloshyn (Committee Member); Mary Natvig (Committee Member)

Subjects:

Biology; Ethics; History; Information Technology; Medical Ethics; Military History; Minority and Ethnic Groups; Music; Nanotechnology; Robotics; Robots; Spirituality; Technology; Theology

Keywords:

Steve Reich; Beryl Korot; Three Tales; Technology; Hindenburg zeppelin; Bikini Atoll; Cloning; electronic music; IRCAM; freeze frame sound; new music theater; Kismet; human connectedness; human isolation; technology and art; art and politics; paradox

Prempeh, James Agyeman
Dynamic Culture-Centered Design for User Empowerment, with Applications to Techno-Culture in Ghana
Master of Technical and Scientific Communication, Miami University, 2011, English
This paper explores why and how dynamic approaches to Culture-Centered Design can help designers conceive of, and develop, technologies effective at empowering users in specific cultural contexts. In the context of developments in the objective and theories of Culture-Centered Design, I explicate dynamic approaches as those that recognize the dynamic nature of cultural context, the socio-cultural meaning of technologies, and user activity with technology. To illustrate their relevance, these approaches are then applied to the techno-culture of Ghana—as representative of technology challenges and opportunities in Africa—to generate ideas regarding how Ghanaians could be better empowered with effective information technologies.

Committee:

Huatong Sun, PhD (Committee Co-Chair); James Coyle, PhD (Committee Co-Chair); Jean Lutz, PhD (Committee Member)

Subjects:

Cultural Anthropology; Design; Information Technology; Technical Communication; Technology

Keywords:

Culture-Centered Design; Cross-Cultural Design; Internationalization; Ghana; Africa; Information Technology; Usability

Beam, Michael A.
Personalized News: How Filters Shape Online News Reading Behavior
Doctor of Philosophy, The Ohio State University, 2011, Communication

The evolution and diffusion of communication technology has consistently changed interactions between members of the public sphere in forming public opinion. Some democratic scholars have worried recent developments in personalization technologies will degrade public opinion formation. They worry that personalized news allows citizens to only pay attention to news coming from their preferred political perspective and may isolate them from challenging perspectives. Empirical research has shown people with access to more highly selective information technology demonstrate increases in both selectivity and incidental exposure to diverse perspectives.

This dissertation focuses on these behavioral and attitudinal outcomes of using personalized news technologies. Dual-processing theories of information provide the foundation for analyzing opinion formation within the bounded rationality model of public opinion. Personalized news technologies are hypothesized to increase the amount of news exposure and elaboration through increased personal relevance.

Two studies test these broad hypotheses. First, results from a national random sample of adults show that users of personalized web portals are more likely to engage in increased news viewing both online and offline. No differences in preference for perspective-sharing or perspective-challenging sources of news are found between personalized portal users and non-users. Next, results from an online experiment with Ohio adult Internet users show an increase in time spent reading news articles in personalized news portals compared with a generic portal. For news elaboration, an interaction is found between using customized news portals with source recommendations based on explicit user preferences and increased time spent reading per news article. No differences in news elaboration are found for other personalized news designs, including implicitly recommended news sources based on user profile information and showing users only recommended stories. The implications of these results are discussed in terms of the public opinion debate about new communication technologies, selective exposure research, information processing research, and personalized information system design.

Committee:

Gerald M. Kosicki, PhD (Advisor); David R. Ewoldsen, PhD (Committee Member); R. Kelly Garrett, PhD (Committee Member); Andrew F. Hayes, PhD (Committee Member)

Subjects:

Behavioral Sciences; Behaviorial Sciences; Communication; Experiments; Information Systems; Information Technology; Journalism; Mass Communications; Political Science

Keywords:

Internet; personalized; personalization; news; public opinion; politics; election; selective exposure; information processing; portal; web; communication; elaboration

Heberling, Rachel Elaine
Obsolete Communication: An Apparition of the Disembodied Hand and Voice
Master of Fine Arts, The Ohio State University, 2011, Art

I propose that we question our suspension of disbelief in email, cell phones, and daily communication devices. As messages are sent and received through wireless signals, invisible words and voices materialize out of the sky, becoming normal by rote familiarity. The presets and automatic corrections of mobile technology (made for the operator by the machine) cause us to take most communication for granted, so that we have become less aware of our technological extensions in the role of thought.

I wish to remove a layer of this familiarity by re-introducing objects from a past era. When taken out of contemporary context by using obsolete, analog devices (that operate at a slower pace with a much more burdensome interface), this eerie integration of hand, voice and machine becomes much more discernable. Older devices are a means to pause and think about how we are still doing the same things, such as typing and sending messages from one machine to the other, but we simply cannot see what happens between them.

In order to explore and make visible these hidden aspects of technology, I have been representing and interacting with communication devices in my artwork through drawing, printmaking, video and performance. In order to signify the disembodied hand and voice, I have drawn dials, buttons, lenses and telephone receivers, disconnected and partially veiled from the operating hand. I have also created a device of impossible communication, to be interacted with and experienced by any willing participants. As early technology exaggerates a now absent modeling after human forms (oddly and appropriately detached), these works evoke a failed connection and represent an absence and yet a presence of the hand and voice.

Committee:

Charles Massey, Jr. (Advisor); Suzanne Silver (Committee Member); Sergio Soave (Committee Member); Mary Jo Bole (Committee Member)

Subjects:

Communication; Fine Arts; History; Information Technology; Performing Arts; Technology

Keywords:

printmaking; typewriters; telephones; obsolete; analog; communication technology; drawing; video

Gadapa, Shalini
Assessing SeeIT 3D, A Software Visualization Tool
Master of Computing and Information Systems, Youngstown State University, 2012, Department of Computer Science and Information Systems
Software is inherently complex. This is especially true for large open-source systems. Over the past two decades, a number of software visualization tools have been proposed in the literature. The main idea behind creating a software visualization tool is to help a developer or maintainer comprehend the system at different levels of abstraction. Most of the tools have focused on creating elaborate and pretty-looking visualizations. There have not been many cases where a tool has been systematically and empirically validated to make sure that it is really useful to a developer. This thesis tries to bridge this gap between the tool and its empirical validation by assessing one such software visualization tool, SeeIT 3D. Sixteen different tasks were developed in the context of understanding an open-source system, JFreeChart, written in Java. Ten subjects were recruited and an observational study was performed. The main goal was to determine the effectiveness of SeeIT 3D for typical software tasks when using the visualization within the Eclipse IDE. Results and observations are presented. These results will be provided as feedback to the tool developers, who may use them to further improve SeeIT 3D.

Committee:

Bonita Sharif, PhD (Advisor); John Sullins, PhD (Committee Member); Yong Zhang, PhD (Committee Member)

Subjects:

Computer Science; Information Systems; Information Technology

Keywords:

visualization tool; visualizing JFreeChart in SeeIT 3D; SeeIT 3D metaphor; polycylinders (visual type relations)

Garcia, Michael Erik
The Economics of Data Breach: Asymmetric Information and Policy Interventions
Doctor of Philosophy, The Ohio State University, 2013, Agricultural, Environmental and Developmental Economics
Large public and private costs result from attacks on firms’ information technology networks. Successful attacks result in data breaches with private damages from business interruption, reputation, and investigation forensics. Social losses result from exposing individuals’ personal information, leading to state, national, and international policymakers enacting legislation to manage these costs. Inadequate economic modeling exists to analyze this phenomenon, despite the large economic impact of cyberspace, e-commerce, and social networking. This research advances information security economics by deviating from a firm-level model to focus on the social welfare implications of firm and regulator decisions. I comprehensively review the economic and policy environment and develop the first rigorous economic model of regulatory approaches to data breach. I develop a one-period model of information security and analyze the efficacy of regulatory interventions in the face of asymmetric information. The model builds upon existing models of firm and firm-consumer information security investment and draws analogy between information security and managing asymmetric information in the biosecurity and livestock disease literature. I analyze firm and social planner incentives in a non-regulatory environment and three regulatory environments. Without regulation, the firm underinvests in network and data protection relative to the social optimum. In the first regime, the regulator must expend a fixed cost to observe social losses and overcome the firm’s moral hazard. The interaction between network and data protection permits the regulator to induce optimal behavior in two investment decisions with a single regulatory instrument. With sufficiently low regulatory costs, this result is socially preferred. In the second regulatory regime, the regulator must expend the same fixed cost for imperfect observation of social losses and administer a program requiring that the firm report breaches. The regulator can induce reporting with a sufficiently large fine for non-reporting, even with imperfect breach monitoring. In this regime, a disclosure investigation cost distorts the firm’s investment incentives in a manner inconsistent with social objectives, resulting in increased network protection at the expense of data protection. With a sufficiently high disclosure investigation cost, the firm will invest less in data protection than it would in lieu of regulation. The final regime introduces a data protection technology that mitigates social loss and some private damages. The regulator expends the same fixed cost for imperfect observation of social losses and requires disclosure only if the firm does not invest in the safe harbor technology. Except when very costly, this safe harbor technology allows the regulator to induce optimal investment and lower the firm’s regulatory burden. The safe harbor technology results in welfare gains except when the technology is very costly, at which point the firm may exit, or the safe harbor regime defaults to the distorted incentives of the disclosure policy. This research advances economic modeling in the relatively undeveloped field of information security economics. As policy aspects of information security become more developed, policymakers will require better tools to analyze policy impacts on both the firm’s wealth and on social welfare. This research provides a step toward those improved tools.

Committee:

Brian Roe, Ph.D. (Advisor); Sathya Gopalakrishnan, Ph.D. (Committee Member); Ian Sheldon, Ph.D. (Committee Member)

Subjects:

Economics; Information Technology

Keywords:

cybersecurity; cyber security; data breach; economics; data breach notification; information security; information security economics

Kanaparthi, Pradeep Kumar
Detection and Recognition of U.S. Speed Signs from Grayscale Images for Intelligent Vehicles
Master of Science, University of Toledo, 2012, Electrical Engineering

The aim of this thesis is to develop and implement an algorithm that automatically detects and recognizes U.S. speed signs, from the grayscale images captured by a camera mounted on the interior mirror of a vehicle, as a part of designing smarter vehicles. The system operates in real-time within the computational limits of contemporary embedded general purpose processors. This system will assist the driver by providing the necessary information, regarding the assigned speed limits, right in front of him and provide additional safety measures by monitoring the vehicle’s speed.

The proposed method consists of two phases: a detection phase, in which all possible speed signs in the input image are detected, and a recognition phase, in which the detected regions are recognized and the speed limit information is extracted from them. The detection phase utilizes region characteristics, such as aspect ratio and size, to hypothesize the speed sign locations in the input image. We adapted the connected component labeling technique to grayscale images in order to divide the input image into a set of regions. The recognition phase calculates invariant features of the inner parts of the detected regions using Hu’s moments. It first verifies the hypothesis, before extracting the assigned speed limit from the detected region using a feed-forward neural network. The proposed method was tested on a number of traffic images, and the results show that the region characteristics are robust to noisy conditions such as partial occlusions, cluttered backgrounds and deformations.
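
A minimal sketch of the two phases using OpenCV: connected components filtered by size and aspect ratio to hypothesize sign regions, then log-scaled Hu moments as recognition features (the feed-forward neural network stage is omitted); the thresholds and the synthetic test image are illustrative, not the thesis's parameters.

```python
import cv2
import numpy as np

def candidate_regions(gray, min_area=400, aspect=(0.6, 1.0)):
    """Hypothesize sign locations by filtering connected components on
    size and width/height ratio (thresholds are illustrative)."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area and aspect[0] <= w / h <= aspect[1]:
            boxes.append((x, y, w, h))
    return boxes

def hu_features(gray_patch):
    """Translation/scale/rotation-invariant Hu moments, log-scaled as is
    common before feeding them to a classifier."""
    hu = cv2.HuMoments(cv2.moments(gray_patch)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

img = np.zeros((120, 160), dtype=np.uint8)
cv2.rectangle(img, (40, 20), (90, 100), 255, -1)  # stand-in for a sign blob
for (x, y, w, h) in candidate_regions(img):
    print(hu_features(img[y:y + h, x:x + w]))
```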

Committee:

Ezzatollah Salari (Advisor); Kim Junghwan (Committee Member); Jackson Carvalho (Committee Member); Ezzatollah Salari (Committee Chair)

Subjects:

Engineering; Information Technology; Technology

Keywords:

Speed sign; connected component labeling; regions; optical character recognition; neural network.

Mathur, Kush
Mathematical Models and Genetic Algorithm Approaches to Simultaneously Perform Workforce Overtime Capacity Planning and Schedule Cells
Master of Science (MS), Ohio University, 2012, Industrial and Systems Engineering (Engineering and Technology)
The problem studied in this thesis was observed in an actual textile company. The problem is more complex than usual scheduling problems in that overtime requirements are computed and scheduling decisions are made simultaneously. Since tardy jobs are not desirable, overtime work is allowed to minimize the number of tardy jobs or the total tardiness. Two different problems are considered. Problem 1 maximizes total profit by delivering jobs on or before their due dates; tardy jobs in this case are treated as lost sales. Problem 2 minimizes total tardiness and overtime costs; in this case tardy jobs are delivered with associated tardiness penalty costs. For Problem 1, various mathematical models are presented reflecting different overtime workforce hiring practices. To solve the same problem for one particular hiring policy, a Genetic Algorithm (GA) approach is also discussed. The GA includes some newly proposed mutation operators, dynamic and twin. The proposed twin mutation strategy produced the best results across all problem sizes. Mathematical Model 2 was the best mathematical model with respect to both profit and execution time; this model considered partial overtime periods and also allowed different overtime periods across cells. For Problem 2, a mathematical model is presented to solve this complex problem. Experimentation was carried out using three different problem types with five instances each, based on data collected from the company. For most problems, the mathematical model gave results in seconds.
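
A minimal sketch of a permutation-encoded schedule with a tardiness objective and a generic swap mutation, to illustrate the kind of GA machinery involved; the thesis's dynamic and twin mutation operators and its overtime decisions are not reproduced here, and the job data are invented.

```python
import random

def total_tardiness(sequence, jobs):
    """Tardiness of a single-cell schedule given processing times and due dates."""
    t, tardiness = 0, 0
    for j in sequence:
        t += jobs[j]["proc"]
        tardiness += max(0, t - jobs[j]["due"])
    return tardiness

def swap_mutation(sequence, rate=0.2):
    """Generic swap mutation on a permutation chromosome (the thesis's
    'dynamic' and 'twin' operators are more specialized)."""
    seq = sequence[:]
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

jobs = {0: {"proc": 4, "due": 5}, 1: {"proc": 2, "due": 3}, 2: {"proc": 6, "due": 14}}
parent = [0, 1, 2]
child = swap_mutation(parent, rate=1.0)
print(parent, total_tardiness(parent, jobs))
print(child, total_tardiness(child, jobs))
```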

Committee:

Gursel Suer, PhD (Advisor); Dusan Sormaz, PhD (Committee Member); Tao Yuan, PhD (Committee Member); Faizul Huq, PhD (Committee Member)

Subjects:

Applied Mathematics; Engineering; Industrial Engineering; Information Science; Information Systems; Information Technology

Keywords:

Scheduling; Genetic Algorithm; Mathematical Model; decision making

Bender, Patricia Lynn
Implementation of a Parent-Generated Electronic Family Health History Tool in an Urban Pediatric Primary Care Setting
Doctor of Nursing Practice Degree Program in Population Health Leadership DNP, Xavier University, 2018, Nursing
As the United States (U.S.) health care system moves towards a health promotion model, identifying those at risk for common health conditions is crucial. Comprehensive family health history (FHH) data collection and analysis has been proposed as a low-cost, highly efficient and effective way to screen for common health conditions. However, patients' electronic health records (EHRs) currently do not contain enough FHH information to adequately assess health risks. The purpose of this DNP scholarly project was to implement a parent-completed electronic family health history (eFHH) tool in a socially disadvantaged pediatric population receiving care in an urban primary care clinic. A descriptive observational study design was used to evaluate parents' use of My Family Health Portrait (MFHP), an eFHH tool. Forty parent participants were observed for ease of MFHP use to determine the feasibility of using a parent-completed FHH tool. The majority of parents (85%) were able to complete the MFHP tool prior to completing provider evaluations, with 70% of parents completing a four-generation family history assessment using MFHP. Facilitators of completion included the desire to enter their own information, perceived positive benefit, ease of use, internet access, and enjoyment of entering the information. Barriers to completing the MFHP tool were that the program was not intuitive, issues with unknown information, clinic interruptions, the complexity of the health categories, and that the tool is not pediatric-focused. Results support the possibility of using a parent-generated electronic family health history tool in a pediatric care setting.

Committee:

Elizabeth Bragg, PhD (Committee Chair); Kelly Bohnhoff, PhD (Committee Member)

Subjects:

Genetics; Health Care; Information Technology

Keywords:

Family Health History, Risk Assessment, Computer-generated tool, Primary Care Providers, My Family Health Portrait, Family History, pedigree, electronic family health history

Young, William F.
1:1 Laptops in Education and Achievement Test Results in One Rural High School
Doctor of Education (Educational Leadership), Youngstown State University, 2017, Department of Counseling, School Psychology and Educational Leadership
The purpose of the study was to explore the relationship between a 1:1 laptop program and the achievement test results for the Ohio Graduation Tests (OGT). Two cohorts were examined (N=193): 1. Tenth graders who took the OGT subtests in Reading, Writing, Math, Science, and Social Studies in 2014 (n=109) and who had received traditional instruction and 2. Tenth graders who were given individual laptops and eTexts to use at school and at home, and who took the same OGT tests in 2015 (n=84). A Chi Square statistical assessment was conducted to compare student performance. No statistical difference was evident for overall passage rates when comparing the two cohorts. For the laptop cohort, there was no statistical difference in the expected counts for the subject areas of Writing, Science, and Social Studies. For Reading, laptop cohort scores reflect a trend, with scores moving upward into the Accelerated performance category. Math scores showed significantly more scores falling in the highest performance category of Advanced in comparison to what was expected. Similarly, when looking at the economically disadvantaged subgroup within the laptop cohort (n=29), a positive and significant difference from what was expected occurred within the Advanced category for Math, while a trend toward significance for improved performance occurred for Reading scores. The potential for significant gains in student achievement is evident. Additional longitudinal research is warranted to better understand the significance of impact as pedagogical practices develop following initial implementation and considering contextual factors.
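
A minimal sketch of the chi-square comparison described above, using scipy on a placeholder 2x2 table of pass/fail counts; the counts are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Placeholder 2x2 table of OGT outcomes (counts are invented for illustration):
# rows = cohort (traditional 2014, laptop 2015), columns = (passed, did not pass).
table = [
    [95, 14],
    [76, 8],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A p-value above 0.05 would match the study's finding of no overall
# difference in passage rates between the two cohorts.
```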

Committee:

Jane Beese, Ed.D. (Committee Chair); Charles Vergon, J.D. (Committee Member); Karen Giorgetti, Ph.D. (Committee Member); I-Chun Tsai, Ph.D. (Committee Member)

Subjects:

Educational Leadership; Educational Technology; Educational Tests and Measurements; Information Technology; Mathematics Education; School Administration; Technology

Keywords:

1 to 1 laptops; laptops in education; laptops and achievement tests; ubiquitous technologies; technology and education; laptops; achievement tests; rural schools; change in education; 1 to 1 technology, one to one; laptop programs; mathematics education
