
Search Results



Search Report

  • 1. Rahman, Md Low Power Based Cognitive Domain Ontology Solving Approaches

    Doctor of Philosophy (Ph.D.), University of Dayton, 2021, Electrical Engineering

    The demand for autonomous systems is increasing in multiple domains, including mobile systems (UAVs, cars, and robots) and planning systems, as autonomy extends system performance beyond human capabilities. In autonomous systems, agents mine massively large knowledge databases to make intelligent and optimal decisions at run time. Knowledge mining and decision-making are cast as constraint satisfaction problems (CSPs), where solutions are generated by satisfying a set of constraints from the domain. CSPs have become a point of interest because of their affiliation with both artificial intelligence and operations research; from resource allocation and automated decision-making to gaming, they are widespread. An autonomous system achieves its autonomy by solving these problems using CSP solving approaches, including Boolean satisfiability, satisfiability modulo theories, and answer set programming. Autonomy is the degree of acquired autonomous capability. Within the Air Force, autonomy is defined as the ability to select the required course of action (COA) to achieve higher objectives. The Cognitively Enhanced Complex Event Processing (CECEP) framework being developed at the US Air Force is an autonomous decision support tool that enables enhanced agent-based decision making. CECEP enables an autonomous system to process complex real-world events and select the required course of action to achieve optimal results. It is capable of representing and processing declarative, procedural, and domain-specific knowledge to deal with all forms of real-world events, and it incorporates several task-independent knowledge processing frameworks so that it can perform as a generic problem-solving framework. This problem-solving capability makes CECEP a universal complex event processing framework that can be utilized in both military and civilian domains. 
CECEP captures its domain knowledge in a cognitive domain ontology (CDO), storing it (open full item for complete abstract)

    Committee: Tarek Taha (Committee Chair) Subjects: Electrical Engineering
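
    The CSP framing in the abstract above can be illustrated with a minimal backtracking solver. This is a generic textbook sketch, not the CECEP implementation; the variable, domain, and constraint names are hypothetical.

    ```python
    # Minimal backtracking CSP solver: assign each variable a value from its
    # domain so that every binary constraint is satisfied.
    def solve_csp(variables, domains, constraints, assignment=None):
        """variables: list of names; domains: {var: [values]};
        constraints: {(var1, var2): predicate(v1, v2)}."""
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):
            return assignment                       # all variables assigned
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            # Check every constraint involving var whose other variable
            # is already assigned.
            ok = all(
                pred(value if a == var else assignment[a],
                     value if b == var else assignment[b])
                for (a, b), pred in constraints.items()
                if (a == var or b == var) and {a, b} - {var} <= assignment.keys()
            )
            if ok:
                result = solve_csp(variables, domains, constraints,
                                   {**assignment, var: value})
                if result:
                    return result
        return None                                 # dead end: backtrack

    # Toy instance: three tasks, adjacent tasks may not share a resource.
    vars_ = ["t1", "t2", "t3"]
    doms = {v: ["cpu", "gpu"] for v in vars_}
    cons = {("t1", "t2"): lambda a, b: a != b,
            ("t2", "t3"): lambda a, b: a != b}
    print(solve_csp(vars_, doms, cons))
    # {'t1': 'cpu', 't2': 'gpu', 't3': 'cpu'}
    ```

    The exponential cost of this kind of search on large knowledge bases is exactly what motivates the hardware acceleration studied in the thesis.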
  • 2. Atahary, Tanvir Acceleration of Cognitive Domain Ontologies

    Doctor of Philosophy (Ph.D.), University of Dayton, 2016, Electrical Engineering

    This thesis examined several efforts to accelerate knowledge mining from Cognitive Domain Ontologies (CDOs), the knowledge repository in the Cognitively Enhanced Complex Event Processing (CECEP) architecture. The CECEP architecture was developed at the US Air Force Research Laboratory. It is an autonomous decision support tool that reasons and learns like a human and enables enhanced agent-based decision-making, with applications in both military and civilian domains. Real-time agents require massively linked knowledge databases to be searched under a large set of constraints to generate intelligent decisions at run time. One of the most computationally challenging aspects of CECEP is mining the domain knowledge captured in CDOs. The CDO mining process employed in the CECEP architecture is cast as a constraint satisfaction problem (CSP). It falls into the category of NP-complete problems, which are very likely to require massive computing to solve; even a small instance of an NP-complete problem can in some cases take years of computing. Search is the ubiquitous procedure for solving CSPs, but sometimes enforcing constraint consistency is enough to find a valid solution without performing a search. This thesis explored several CSP algorithms and deployed two of them on a heterogeneous hardware platform in order to mine CDOs. We initially examined the exhaustive depth-first search (EDFS) algorithm on a cluster of GPGPUs and Intel Xeon Phi co-processors, achieving around 100 times speedup on a GPGPU compared to a single CPU. Since the search space grows exponentially with the EDFS algorithm, this study then explored an intelligent search algorithm that can prune the search space according to the constraints. We modified the conventional Forward Checking (FC) algorithm, introduced a novel path-based forward checking algorithm to mine CDOs, and compared it with a commonly utilized CSP solver. 
Conventional singl (open full item for complete abstract)

    Committee: Tarek Taha Dr. (Committee Chair); Vijayan Asari Dr. (Committee Member); Eric Balster Dr. (Committee Member); Scott Douglass Dr. (Committee Member) Subjects: Cognitive Psychology; Computer Engineering; Computer Science; Electrical Engineering
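
    The abstract above contrasts exhaustive depth-first search with forward checking. A single textbook forward-checking step (a generic sketch, not the thesis's path-based variant; names are illustrative) prunes the domains of unassigned neighbours after each assignment:

    ```python
    import copy

    def forward_check(domains, var, value, neighbors, conflict):
        """After assigning var=value, remove conflicting values from each
        unassigned neighbour's domain. Returns the pruned domains, or None
        if any domain is wiped out (signalling an early backtrack)."""
        pruned = copy.deepcopy(domains)
        pruned[var] = [value]
        for n in neighbors.get(var, []):
            pruned[n] = [v for v in pruned[n] if not conflict(value, v)]
            if not pruned[n]:
                return None          # domain wipe-out: prune this branch
        return pruned

    # Toy graph-colouring step: t1 and t2 must take different values.
    doms = {"t1": ["red", "green"], "t2": ["red", "green"]}
    nbrs = {"t1": ["t2"]}
    print(forward_check(doms, "t1", "red", nbrs, lambda a, b: a == b))
    # {'t1': ['red'], 't2': ['green']}
    ```

    Detecting a wipe-out immediately after an assignment, rather than after a full descent, is what lets forward checking avoid the exponential blow-up of blind EDFS.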
  • 3. Shankar, Arunprasath ONTOLOGY-DRIVEN SEMI-SUPERVISED MODEL FOR CONCEPTUAL ANALYSIS OF DESIGN SPECIFICATIONS

    Master of Sciences (Engineering), Case Western Reserve University, 2014, EECS - Computer Engineering

    The integration of reusable IP blocks/cores is a common process in system-on-chip design and involves manually comparing/mapping IP specifications against system requirements. The informal nature of specifications limits their automatic analysis. Existing techniques fail to utilize the underlying conceptual information embedded in specifications. In this thesis, we present a methodology for specification analysis that involves concept mining of specifications to generate domain ontologies. We employ a semi-supervised model with semantic analysis capability to create a collaborative framework for cumulative knowledge acquisition. Our system then uses the generated ontologies to perform component retrieval and spec comparisons. We demonstrate our approach by evaluating several IP specifications.

    Committee: Christos Papachristou (Advisor) Subjects: Computer Engineering; Computer Science; Information Systems; Systems Design
  • 4. Ramakrishnan, Cartic Extracting, Representing and Mining Semantic Metadata from Text: Facilitating Knowledge Discovery in Biomedicine

    Doctor of Philosophy (PhD), Wright State University, 2008, Computer Science and Engineering PhD

    The information access paradigm offered by most contemporary text information systems is search-and-sift, where users must manually glean and aggregate relevant information from the large number of documents typically returned in response to keyword queries. Expecting users to glean and aggregate information has led to several inadequacies in these information systems. Owing to the size of many text databases, search-and-sift is very tedious, often requiring repeated keyword searches that refine or generalize query terms. A more serious limitation arises from the lack of automated mechanisms to aggregate content across different documents to discover new knowledge. This dissertation focuses on processing text to assign semantic interpretations to its content (extracting semantic metadata) and on the design of algorithms and heuristics that utilize the extracted semantic metadata to support knowledge discovery operations over text content. The contributions in extracting semantic metadata cover the extraction of compound entities and complex relationships connecting entities. Extraction results are represented using a standard Semantic Web representation language (RDF) and are manually evaluated for accuracy. The knowledge discovery algorithms presented herein operate on RDF data. To further improve access to text content, applications supporting semantic browsing and semantic search of text are presented.

    Committee: Amit Sheth PhD (Advisor); Michael Raymer PhD (Committee Member); Shaojun Wang PhD (Committee Member); Guozhu Dong PhD (Committee Member); Thaddeaus Tarpey PhD (Committee Member); Vasant Honavar PhD (Committee Member) Subjects: Computer Science
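
    The RDF data model the abstract above mentions is just subject-predicate-object triples, and knowledge discovery over it amounts to joining triples into paths. A minimal sketch with plain tuples (entity and predicate names are invented for illustration, not taken from the dissertation):

    ```python
    # Semantic metadata as subject-predicate-object triples (the RDF data
    # model), plus a two-hop join of the kind a discovery query might run.
    triples = [
        ("MigraineStudy1", "mentions_compound", "MagnesiumOxide"),
        ("MagnesiumOxide", "affects", "CalciumChannel"),
        ("CalciumChannel", "associated_with", "Migraine"),
    ]

    def two_hop(triples, start):
        """Follow any predicate two steps outward from `start`."""
        step1 = [(p, o) for s, p, o in triples if s == start]
        return [(p1, mid, p2, o2)
                for p1, mid in step1
                for s2, p2, o2 in triples if s2 == mid]

    print(two_hop(triples, "MagnesiumOxide"))
    # [('affects', 'CalciumChannel', 'associated_with', 'Migraine')]
    ```

    Chaining such joins across documents is what lets triple-based metadata surface connections that no single document states outright.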
  • 5. Mo, Dengyao Robust and Efficient Feature Selection for High-Dimensional Datasets

    PhD, University of Cincinnati, 2011, Engineering and Applied Science: Mechanical Engineering

    Feature selection is an active research topic in the machine learning and knowledge discovery in databases (KDD) communities. It contributes to making the data mining model more comprehensible to domain experts, improving the prediction performance and robustness of the model, and reducing model training time. This dissertation aims to provide solutions to three issues that are overlooked by many current feature selection researchers: feature interaction, data imbalance, and multiple subsets of features. Most extant filter feature selection methods are pair-wise comparison methods, which test each pair of variables, i.e., one predictor variable and the response variable, and provide a correlation measure for each feature associated with the response variable. Such methods cannot take feature interactions into account. Data imbalance is another issue in feature selection: without considering it, the features selected will be biased towards the majority class. In high-dimensional datasets with sparse data samples, there will be many different feature sets that are highly correlated with the output. Domain experts usually expect us to identify multiple feature sets for them so that they can evaluate the sets based on their domain knowledge. This dissertation addresses these three issues with a criterion called minimum expected cost of misclassification (MECM). MECM is a model-independent evaluation measure: it evaluates the classification power of the tested feature subset as a whole, and it has adjustable weights to deal with imbalanced datasets. A number of case studies showed that MECM has favorable properties for searching for a compact subset of interacting features. In addition, an algorithm and a corresponding data structure were developed to produce multiple feature subsets. 
The success of this research will have broad applications ranging from engineering, business, to bioinformatics, such as credit card fraud detection, email f (open full item for complete abstract)

    Committee: Hongdao Huang PhD (Committee Chair); Sundararaman Anand PhD (Committee Member); Jaroslaw Meller PhD (Committee Member); David Thompson PhD (Committee Member); Michael Wagner PhD (Committee Member) Subjects: Information Systems
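
    The adjustable-weight idea behind a cost-of-misclassification criterion can be sketched in a few lines. This is a generic illustration of the form such a measure takes, not the dissertation's exact MECM definition; the costs and labels are made up.

    ```python
    def expected_misclassification_cost(y_true, y_pred, cost_fp, cost_fn):
        """Average cost over samples, with separate weights for false
        positives and false negatives (useful on imbalanced data)."""
        total = 0.0
        for t, p in zip(y_true, y_pred):
            if t == 0 and p == 1:
                total += cost_fp       # false alarm
            elif t == 1 and p == 0:
                total += cost_fn       # missed positive
        return total / len(y_true)

    # Imbalanced toy data: missing a rare positive costs 10x a false alarm.
    y_true = [0, 0, 0, 0, 1]
    y_pred = [0, 1, 0, 0, 0]
    print(expected_misclassification_cost(y_true, y_pred, cost_fp=1, cost_fn=10))
    # one FP (cost 1) + one FN (cost 10) over 5 samples = 2.2
    ```

    Raising `cost_fn` steers any search that minimizes this score toward feature subsets that recover the minority class, which is the point of the adjustable weights.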
  • 6. HE, AIJING UNSUPERVISED DATA MINING BY RECURSIVE PARTITIONING

    MS, University of Cincinnati, 2002, Engineering : Computer Science

    In this thesis, an experimental investigation into unsupervised database mining was conducted, testing a novel paradigm for autonomous mining proposed by Dr. L. J. Mazlack. The idea is that increasing coherence will increase conceptual information, and this in turn will reveal previously unrecognized, useful, and implicit information [Mazlack, 1996]. In the experiments, different partitioning heuristics were tested: arbitrary partition, balanced partition, and imbalanced partition; their usefulness and differences in results are discussed in this thesis. To assist the partitioning heuristics, a rough set based model called Total Roughness was designed to measure the crispness of a partition. This model was used in the experiments to help choose the partitioning attribute as well as to perform non-scalar data clustering. The feasibility of integrating rough set theory into unsupervised partitioning is evaluated and addressed in this thesis.

    Committee: Dr. Lawrence J. Mazlack (Advisor) Subjects: Computer Science
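
    A Total Roughness measure like the one named above builds on the standard rough-set notions of lower and upper approximations. A minimal sketch of per-set roughness (the partition and target here are illustrative; the thesis's exact aggregation into Total Roughness is not reproduced):

    ```python
    def roughness(equivalence_classes, target):
        """Rough-set roughness of a target set of objects:
        1 - |lower approximation| / |upper approximation|.
        0 means the target is crisp with respect to the partition."""
        lower = set().union(*([c for c in equivalence_classes if c <= target]
                              or [set()]))
        upper = set().union(*([c for c in equivalence_classes if c & target]
                              or [set()]))
        return 1 - len(lower) / len(upper) if upper else 0.0

    # Objects 1..6 partitioned by some attribute; target concept = {1, 2, 3}.
    classes = [{1, 2}, {3, 4}, {5, 6}]
    print(roughness(classes, {1, 2, 3}))
    # 0.5: lower = {1, 2}, upper = {1, 2, 3, 4}
    ```

    A partition whose classes yield roughness near 0 describes the target crisply, which is why crispness can guide the choice of partitioning attribute.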
  • 7. Goyder, Matthew Knowledge Accelerated Algorithms and the Knowledge Cache

    Master of Science, The Ohio State University, 2012, Computer Science and Engineering

    Knowledge discovery through data mining is the process of automatically extracting actionable information from data, that is, the information or knowledge found within data which provides insight beyond what may be found by observing the cardinal state of the data itself. This process is human driven; there is always a human at the core. Knowledge discovery is inherently iterative: a human discovers information by posing questions to a data mining system, which in turn provides answers; new questions are developed upon receipt of these answers, and these new questions are asked. Clearly, answers need to arrive in as timely a fashion as possible for the human at the core to form ideas and solidify hypotheses, yet many questions take too long to answer to be useful. Can we speed up the response to a question when its answer is based in part upon answers previously provided? When a query (question) is submitted (asked) to a data mining system, we can store the result (answer), as well as information about the result, in a cache and then re-use this information to help respond to the next query more quickly. If a query partially contains a result found in the past, we can combine that information with new information to provide the result much faster than if we re-ran the query with no prior information. This thesis explores this idea by introducing a high-performance information cache called a Knowledge Cache with remote access capabilities, as well as a programming model and API for clients to store, query, share, and retrieve knowledge objects from within it. These knowledge objects can then be used in conjunction with a modified data mining algorithm to reduce query processing time for new queries where prior information is useful. 
We explain the usage model of the Knowledge Cache and API, as well as demonstrate performance gains (open full item for complete abstract)

    Committee: Srinivasan Parthasarathy PhD (Advisor); Gagan Agrawal PhD (Committee Member) Subjects: Computer Science
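
    The reuse idea in the abstract above can be sketched as a cache keyed by query terms, where a new query looks up the largest cached query it extends. This is a toy illustration; the class and method names are invented, not the thesis's Knowledge Cache API.

    ```python
    # Minimal sketch of result caching for incremental queries: if a new
    # query extends a cached one, reuse the cached answer and process only
    # the delta instead of re-running from scratch.
    class KnowledgeCache:
        def __init__(self):
            self._store = {}   # frozenset(query terms) -> result set

        def put(self, terms, result):
            self._store[frozenset(terms)] = set(result)

        def best_prior(self, terms):
            """Largest cached query whose terms are a subset of `terms`,
            or None when no prior result is reusable."""
            terms = frozenset(terms)
            candidates = [k for k in self._store if k <= terms]
            return max(candidates, key=len, default=None)

    cache = KnowledgeCache()
    cache.put({"mining", "text"}, {"doc1", "doc7"})
    prior = cache.best_prior({"mining", "text", "biomedical"})
    print(sorted(prior))
    # ['mining', 'text']  -> its stored result seeds the new query
    ```

    The interactive-loop payoff is that each refined question starts from the previous answer rather than from the raw data.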
  • 8. Brown, Marvin The Impact of Data Imputation Methodologies on Knowledge Discovery

    Doctor of Business Administration, Cleveland State University, 2008, Nance College of Business Administration

    The purpose of this research is to investigate the impact of the data imputation methodologies employed when a specific data mining algorithm is utilized within a KDD (Knowledge Discovery in Databases) process. This study will employ knowledge discovery processes that are widely accepted in both the academic and commercial worlds. Several knowledge discovery models will be developed utilizing secondary data containing known correct values. Tests will be conducted on the secondary data both before and after storing data instances with known results and then identifying imprecise data values. One of the integral stages in successful knowledge discovery is the data mining phase, which deals significantly with prediction, estimation, classification, pattern recognition, and the development of association rules. Neural networks are the most commonly selected tools for data mining classification and prediction. Neural networks employ various types of transfer functions when outputting data; the most commonly employed is the s-sigmoid function. Knowledge discovery models from various research and business disciplines were tested using this framework. However, missing and inconsistent data have been pervasive problems in the history of data analysis since the origin of data collection. Due to advancements in data storage capacity and the proliferation of computer software, more historical data is being collected and analyzed today than ever before. The issue of missing data must be addressed, since ignoring it can introduce bias into the models being evaluated and lead to inaccurate data mining conclusions. 
The objective of this research is to address the impact of Missing Data and Data Imputation on the Data Mining phase of Knowledge Discovery when Neural Networks are utilized when employing an s-Sigmoid Transfer function, and are confronted with Missing Data and Data Imp (open full item for complete abstract)

    Committee: Chien-Hua (Mike) Lin Phd (Committee Chair); Adam Fadlalla Phd (Committee Member); Walter Rom Phd (Committee Member); John Kros Phd (Committee Member); Marc Lynn Phd (Advisor) Subjects: Business Education; Computer Science
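
    Two of the building blocks the abstract above names, a simple imputation step and the sigmoid transfer function, can be sketched directly. Mean imputation is only the baseline method here, chosen for brevity; the dissertation compares several methodologies.

    ```python
    import math

    def mean_impute(column):
        """Replace missing entries (None) with the mean of the observed
        values -- the simplest imputation baseline."""
        observed = [x for x in column if x is not None]
        mean = sum(observed) / len(observed)
        return [mean if x is None else x for x in column]

    def sigmoid(x):
        """The s-shaped transfer function used by the neural networks
        in the study."""
        return 1.0 / (1.0 + math.exp(-x))

    print(mean_impute([2.0, None, 4.0]))   # [2.0, 3.0, 4.0]
    print(round(sigmoid(0.0), 2))          # 0.5
    ```

    Because every imputed value feeds through the sigmoid into the model's outputs, a biased imputation rule propagates directly into the mining conclusions, which is the effect the research measures.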