Search Results

(Total results 219)


  • 1. Al-Olimat, Hussein Knowledge-Enabled Entity Extraction

    Doctor of Philosophy (PhD), Wright State University, 2019, Computer Science and Engineering PhD

    Information Extraction (IE) techniques are developed to extract entities, relationships, and other detailed information from unstructured text. The majority of the methods in the literature focus on designing supervised machine learning techniques, which are not very practical due to the high cost of obtaining annotations and the difficulty of creating high quality (in terms of reliability and coverage) gold standards. Therefore, semi-supervised and distantly-supervised techniques have been gaining traction to overcome some of these challenges, such as bootstrapping the learning quickly. This dissertation focuses on information extraction, and in particular on the extraction of entities, i.e., Named Entity Recognition (NER), from multiple domains, including social media as well as grammatical texts such as news and medical documents. This work explores ways of lowering the cost of building NER pipelines with the help of available knowledge, without compromising the quality of extraction and while simultaneously taking into consideration feasibility and other concerns such as user experience. I present distantly supervised (dictionary-based), supervised (with cost reduced using entity set expansion and active learning), and minimally supervised NER approaches. In addition, I discuss the various aspects of knowledge-enabled NER approaches and how and why they are a better fit for today's real-world NER pipelines in dealing with, and partially overcoming, the above-mentioned difficulties. I present two dictionary-based NER approaches. The first technique extracts location mentions from text streams and proved very effective for stream processing, with competitive performance in comparison with ten other techniques. The second is a generic NER approach that scales to multiple domains and is minimally supervised with a human-in-the-loop for online feedback. The two techniques augment and filter the dictionaries to compensate for their incompleteness (due to lexical variat (open full item for complete abstract)

    Committee: Krishnaprasad Thirunarayan Ph.D. (Advisor); Keke Chen Ph.D. (Committee Member); Guozhu Dong Ph.D. (Committee Member); Steven Gustafson Ph.D. (Committee Member); Srinivasan Parthasarathy Ph.D. (Committee Member); Valerie L. Shalin Ph.D. (Committee Member) Subjects: Artificial Intelligence; Computer Science
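    The entry above centers on dictionary-based (gazetteer) NER. The sketch below illustrates the general idea of that family of approaches only: greedy longest-match lookup of token spans against a gazetteer. The gazetteer, sentence, and entity labels are invented for illustration and this is not the dissertation's actual system.

```python
from typing import Dict, List, Tuple

def dictionary_tag(tokens: List[str],
                   gazetteer: Dict[Tuple[str, ...], str],
                   max_len: int = 4) -> List[Tuple[int, int, str]]:
    """Greedy longest-match lookup of token spans against a gazetteer."""
    spans, i = [], 0
    while i < len(tokens):
        match = None
        # Try the longest candidate span first so a three-word entry is
        # preferred over a shorter match starting at the same token.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            key = tuple(t.lower() for t in tokens[i:i + n])
            if key in gazetteer:
                match = (i, i + n, gazetteer[key])
                break
        if match:
            spans.append(match)
            i = match[1]
        else:
            i += 1
    return spans

# Hypothetical gazetteer and sentence, purely for illustration.
gaz = {("dayton",): "LOC", ("wright", "state", "university"): "ORG"}
print(dictionary_tag("Flooding near Wright State University in Dayton".split(), gaz))
# -> [(2, 5, 'ORG'), (6, 7, 'LOC')]
```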
  • 2. Dhyani, Dushyanta Boosting Supervised Neural Relation Extraction with Distant Supervision

    Master of Science, The Ohio State University, 2018, Computer Science and Engineering

    Information extraction forms a large and important component of NLP research, aiming to extract information of varying kinds from a text corpus. This information ranges from (named) entities and their inter-relationships in sentences to facts that can later be used for tasks such as search engine retrieval and question answering. Most of these tasks and their associated (primarily) machine-learning-based solutions ultimately hit a roadblock due to the lack of manually labeled data, compounded by an expensive and laborious annotation task. While unsupervised/semi-supervised methods can be developed for these tasks, their effectiveness and usability can be compromised. For the task of relation extraction, the distantly supervised paradigm has been shown to have enormous potential in providing a relatively large amount of training data, at the cost of label noise. Prior efforts have proposed a variety of solutions to reduce the impact of label noise, both at an architectural level and by adding a small amount of manual supervision. However, we aim to explore a different relation extraction paradigm - can distant supervision help to improve supervised neural relation extraction? This thesis explores various strategies such that a supervised relation extraction model, when supplemented with distant supervision, is able to perform better at test time. While we are unable to successfully use approaches based on attention-driven subspace alignment and adversarial training for our goal, a simple distillation-based approach results in an improvement in the model's performance.

    Committee: Huan Sun (Advisor); Alan Ritter (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science; Language; Linguistics
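    As a rough, assumption-laden sketch of the kind of distillation objective the abstract alludes to: a "student" classifier trained on clean supervised relation labels can also be regularized toward the soft predictions of a "teacher" trained with distant supervision. The temperature, mixing weight, and names below are illustrative choices, not the thesis's actual formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      gold_labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Standard cross-entropy against the manually labeled relations.
    hard = F.cross_entropy(student_logits, gold_labels)
    # KL divergence between temperature-softened student and teacher distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage with random logits for a batch of 4 sentences and 5 relation types.
s, t = torch.randn(4, 5), torch.randn(4, 5)
y = torch.randint(0, 5, (4,))
print(distillation_loss(s, t, y))
```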
  • 3. Rangaraj, Jithendra Kumar Knowledge-based Data Extraction Workbench for Eclipse

    Master of Science, The Ohio State University, 2012, Computer Science and Engineering

    Data from various sources are in heterogeneous underlying formats. Combining and extracting data from all such sources requires the user to generate mappings between the different formats, which vary on a case-by-case basis. The solution proposed is a knowledge-based approach to the problem of data extraction from different source formats: build ontologies from source files, find mappings between ontologies, use reasoners to extract data, and convert ontologies to the desired formats. There are several ontological tools on the market for converting, mapping, and reasoning over ontologies. Rather than re-inventing the wheel, the solution proposed is to develop a workbench of the existing tools in the ontological domain and those that may emerge. This is achieved by an extensible Eclipse-plugin-based architecture called Knowledge-based Data Extraction (KDE). In addition to the architectural aspects of KDE, this thesis makes the following contributions: a taxonomy of terms used in the ontology domain, identification of the capabilities missing in existing ontological tools, and a scenario-based comparison of two ontology mapping algorithms used in the implementation of the architecture.

    Committee: Rajiv Ramnath (Advisor); Jay Ramanathan (Committee Member) Subjects: Computer Engineering; Computer Science
  • 4. Umapathy, Prashanth An Analysis of GPT API for Wrangling Web Scraping Data

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    In my thesis, I investigate three methods to extract product data such as brand, flavor, strain, units, and THC and CBD levels from online cannabis product stores, aiming to find the most effective approach. The process starts with using Python's regex capabilities, a method that is quite precise but needs a lot of specific rules to be set up. This technique involves pulling out product details from websites using patterns, but it can get complicated because each distinct data format requires its own rule. After discussing regex, I introduce the use of the GPT LLM API, an artificial intelligence natural language processing tool that reads and understands product descriptions from raw product website data to extract information automatically. The goal here is to see if this AI can do the job as well as or better than the manual method or the rule-based regex approach. It is a way to potentially streamline the process, reducing the need for so many specific rules. I then describe how we also used a manual method, where people collect the data by hand. This serves as a standard to measure the other methods against, providing a benchmark for accuracy and completeness. A significant part of my thesis is dedicated to explaining how I clean and organize the data from these methods, which is crucial for making it usable and reliable. I detail the strengths and limitations of the GPT API in this context, clarifying what it can handle and where it might need help. Furthermore, I thoroughly document all the procedures and rules used in the study. This is important for transparency and allows others to replicate or build on this work. In the end, I present two datasets, one corrected and extracted by humans and the other through the GPT extraction method. As results, I showcase the different levels of accuracy obtained through these comprehensive approaches. Through this thesis, I shed light on the future of data extraction in specialized fields, for a shift towards more (open full item for complete abstract)

    Committee: Jian Chen (Advisor); Ce Shang (Committee Member) Subjects: Computer Science
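    The regex approach described in the entry above can be illustrated with a minimal sketch: a pattern rule that pulls THC/CBD levels and their units out of a raw product description string. The pattern and the sample listing are invented for illustration, not the thesis's actual rules; real listings vary far more, which is exactly the brittleness the abstract points out.

```python
import re

THC_CBD = re.compile(
    r"(?P<compound>THC|CBD)\s*[:=]?\s*(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>%|mg)",
    re.IGNORECASE,
)

def extract_potency(description: str) -> dict:
    """Return {compound: (value, unit)} for every THC/CBD mention found."""
    results = {}
    for m in THC_CBD.finditer(description):
        results[m.group("compound").upper()] = (float(m.group("value")), m.group("unit"))
    return results

# Hypothetical product description for demonstration only.
print(extract_potency("Blue Dream vape cart, 1g, THC: 82.5% CBD: 0.4%"))
# -> {'THC': (82.5, '%'), 'CBD': (0.4, '%')}
```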
  • 5. Li, Haoyu Efficient Visualization for Machine-Learning-Represented Scientific Data

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    Recent progress in high-performance computing now allows researchers to run extremely high-resolution computational models, simulating detailed physical phenomena. Yet, efficiently analyzing and visualizing the extensive data from these simulations is challenging. Adopting machine learning models to reduce the storage cost of, or extract salient features from, large scientific data has proven to be a successful approach to analyzing and visualizing these datasets effectively. Machine learning (ML) models like neural networks and Gaussian process models are powerful tools for data representation. They can capture the internal structures or "features" of the dataset, which is useful for compressing the data or exploring the subset of data that is of interest. However, applying machine learning models to scientific data brings new challenges to visualization. Machine learning models are usually computationally expensive: neural networks are expensive to reconstruct on a dense grid representing a high-resolution scalar field, and Gaussian processes are notorious for their cubic time complexity in the number of data points. If we consider other variables in the data modeling, for example the time dimension and the simulation parameters in ensemble data, the curse of dimensionality makes the computation cost even higher. The long inference time of machine learning models puts us in a dilemma between the high storage cost of the original data representation and the high computation cost of the machine learning representation. These challenges demonstrate a great need for techniques and algorithms that increase the speed of ML model inference. Despite many generic efforts to increase ML efficiency, for example using better hardware acceleration or designing more efficient architectures, we tackle a more specific problem: how to query the ML model more efficiently for a specific scientific visualization task. In this dissertation, we c (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor); Hanqi Guo (Committee Member); Raphael Wenger (Committee Member) Subjects: Computer Engineering; Computer Science
  • 6. Li, Changwei Title-based video summarization using attention networks

    MS, University of Cincinnati, 2022, Engineering and Applied Science: Electrical Engineering

    The rapid advances in video storage, processing, and streaming services, improvements in cellular communication speed, enhancement of mobile phone cameras, and increased social media engagement have led to explosive growth in the number of videos generated every minute. Therefore, content-based video searching, browsing, and information retrieval technologies have received significant attention in recent years, adapting to the massive number of videos generated. Video summarization techniques are among the methodologies that can help users browse videos quickly and retrieve information more efficiently, by either solely extracting key frames/segments or further assembling the important segments into video skims, highlights, or summaries. In this research, the current video summarization pipeline, collected datasets, and related evaluation metrics are reviewed. Furthermore, various video summarization models that rely on the fusion of video title and visual features using attention networks are proposed and evaluated on publicly available datasets: 1. A baseline video summarization model that uses the correlation among visual features of video frames via an attention network is studied; the training procedure and evaluation metrics are compared against similar recent studies. 2. Extracting video title embeddings using pre-trained language models, various methodologies for integrating video title information into the baseline model are studied and evaluated. Re-shaping self-attention into cross-attention, a model that takes advantage of the correlation between the video title and frame visual features is proposed. Given that the correlation of visual frames in long sequences does not necessarily convey the video storyline, the fusion of title information in the proposed model improved the video summarization performance as expected. 3. Finally, to further improve the performance of the proposed model, the loss function is modified to combine the accuracy of frame-level score pr (open full item for complete abstract)

    Committee: Mehdi Norouzi Ph.D. (Committee Member); Xuefu Zhou Ph.D. (Committee Member); Wen-Ben Jone Ph.D. (Committee Member) Subjects: Electrical Engineering
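    A minimal, assumption-laden sketch of the cross-attention fusion idea mentioned in the entry above: per-frame visual features attend to title-token embeddings, and a small head scores each frame's importance. The dimensions, the scoring head, and the use of PyTorch's built-in multi-head attention are illustrative assumptions, not the model proposed in the thesis.

```python
import torch
import torch.nn as nn

class TitleFrameCrossAttention(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)  # per-frame importance score

    def forward(self, frames: torch.Tensor, title: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, dim); title: (batch, n_title_tokens, dim).
        # Frames act as queries and the title tokens as keys/values, so each
        # frame representation is re-weighted by its relevance to the title.
        fused, _ = self.attn(query=frames, key=title, value=title)
        return self.score(fused).squeeze(-1)  # (batch, n_frames)

model = TitleFrameCrossAttention()
scores = model(torch.randn(2, 120, 512), torch.randn(2, 12, 512))
print(scores.shape)  # torch.Size([2, 120])
```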
  • 7. Hassan, Wael Comparing Geomorphometric Pattern Recognition Methods for Semi-Automated Landform Mapping

    Master of Science (MS), Ohio University, 2020, Geography (Arts and Sciences)

    Landscape regions and hydrological features such as wetlands, rivers, and lakes are frequently mapped and stored digitally as features. Their boundaries can be mapped and identified at the physically observable wetland-dryland interface. However, landforms such as mountains, hills, mesas, and valleys, which are cognized as component features of or objects attached to the terrestrial surface, are not easily delineated due to the lack of clear or unambiguous criteria for defining their boundaries. It is quite challenging to determine where the boundary of a mountain, hill, or valley starts and ends because terrain type, culture, language, and other subjective factors greatly affect how the same portion of the terrestrial surface may be discretized, classified, labeled, and characterized by people. Cartographers have traditionally used point and line symbols as labels to describe landforms on a map, but this approach ignores the problem of representing the possible physical shape and extent of landforms. This thesis advanced prior work in the fields of geomorphometry and geographic information science to test the viability of existing semi-automated terrain analysis methods for mesoscale landforms that are easily recognized by people because of local topographic and cultural salience. The focus was on finding methods that can help automate the extraction of three broad categories of landforms: non-linear eminences (e.g., peak, mount, pillar, mountain, hill, mesa, butte), linear eminences (e.g., ridge and spur), and linear depressions (e.g., channel, valley, and hollow). Three methods proposed by Wood (1996), Jasiewicz and Stepinski (2013), and Weiss (2001) were selected because they are popular in terrain characterization, have shown promising results for mapping discrete terrain features that are intended to resemble landforms recognized intuitively by people, and are easily available for experimentation in freely available software. These methods require onl (open full item for complete abstract)

    Committee: Gaurav Sinha Associate Professor (Committee Chair); Dorothy Sack Professor (Committee Member); Timothy Anderson Associate Professor (Committee Member) Subjects: Geography
  • 8. Gobburi, Ashta Lakshmi Prasad An LC-MS/MS APPROACH FOR GANGLIOSIDES PROFILING IN BRAIN AND RETINAL TISSUE OF MICE: APPLICATION TO GLAUCOMA MICE AGE STUDIES

    Doctor of Philosophy in Clinical-Bioanalytical Chemistry, Cleveland State University, 2017, College of Sciences and Health Professions

    Gangliosides are membrane lipids with a complex sugar head containing sialic acid (polar) and a ceramide moiety made up of a sphingosine and a fatty acid (non-polar). Typically, gangliosides are part of the ganglion, a group of nerve cell bodies, where they act as neuroprotective agents and support the maintenance of mature neuronal cells. Glaucoma is an age-related neurodegenerative disorder of the eye that leads to blindness. Degeneration of retinal ganglion cells (RGCs) in the retinal tissue of the eye is seen, followed by optic nerve head damage. Immunohistochemistry studies with cholera toxin-B showed that gangliosides that are part of the RGCs were lost during RGC degeneration. Astrocytes, non-neuronal cells that do not express gangliosides in normal retinal tissue, surprisingly express/uptake gangliosides in correlation with RGC degeneration. Astrocyte expression/uptake of gangliosides can have protective, adverse, or combined effects at different times. Identification and quantitation of the individual gangliosides that are expressed/taken up provide insight into the pathophysiology of glaucoma. Specific, sensitive, and quantitative liquid chromatography-mass spectrometry (LC-MS/MS) analytical techniques were developed to identify heterogeneous molecular species of gangliosides. Of these, a new LC-MS/MS method using phenyl-hexyl reverse-phase chromatography was successful in resolving eight major gangliosides and ten minor gangliosides. An optimized ganglioside isolation technique was developed, combining liquid-liquid extraction for lipid phase separation followed by solid-phase extraction for desalting and removal of debris. This new LC-MS/MS method was validated for linearity, repeatability, matrix effect, and recovery. Age studies (young vs. old mice) were performed for the first time to correlate the changes of gangliosides at the retinal tissue level in comparison to the superior colliculus of the mid-brain u (open full item for complete abstract)

    Committee: Anderson David (Advisor); Long-Sun Xue (Committee Member); Zhou Aimin (Committee Member); Zhou Xiang (Committee Member); Holland Nolan (Committee Member) Subjects: Analytical Chemistry; Biochemistry; Biology; Chemistry
  • 9. Ramirez Cadavid, David Development of Processes for the Extraction of Industrial Grade Rubber and Co-Products from the Roots of Taraxacum kok-saghyz (TK)

    Doctor of Philosophy, The Ohio State University, 2017, Food, Agricultural and Biological Engineering

    Natural rubber (NR) is a raw material vital to the modern economy that is produced from a single plant species (Hevea brasiliensis) and whose supply is insecure. Taraxacum kok-saghyz (TK) is an ideal rubber-producing crop because it synthesizes high quality rubber with composition, molecular characteristics, and mechanical properties similar to hevea NR. Extensive research on the development of TK as a commercially feasible crop is underway. Nonetheless, little research has been done on processing for the recovery of rubber and byproducts. In this work, a comprehensive compositional analysis of field-harvested TK roots was conducted to identify components that may have commercial value. Based on this analysis, new aqueous processes for the extraction of NR from TK roots that feature the use of alkaline pretreatment and enzymatic hydrolysis were investigated. These processes were compared to wet milling treatments in order to identify those that result in NR of high purity, yield, and quality. Furthermore, alkaline pretreatment of TK roots at temperatures between 25 and 160°C, and NaOH loadings from 33 to 132 mg NaOH/g TK roots, was further studied to identify its impact on rubber yield, purity, and quality. Additionally, reliable methods to quantify primary TK root components were developed to allow the analysis of the large numbers of samples produced in the development of TK roots. The compositional analysis mass closure was greater than 95%. The roots contained 5.4% rubber and 1.7% resins, and 60% of the root was hot-water extractable. Inulin (18%) and proteins (10%) were the most abundant water-soluble fractions. Insoluble components included cellulose (9%), hemicellulose (7%), lignin (5%), protein (5%), and pectin (3%). Based on the compositional analysis and current TK field plant density data, potential yields of various biobased products were calculated. Results indicated that TK is a potential source of NR and other products of importance such as inulin and prot (open full item for complete abstract)

    Committee: Frederick Carl Michel (Advisor); Katrina Cornish (Advisor); Ajay Shah (Committee Member); Thaddeus Ezeji (Committee Member) Subjects: Agricultural Chemicals; Agricultural Engineering; Chemical Engineering; Engineering; Materials Science; Polymers
  • 10. Menon, Sreejit Design and Development of 2-Functionalized Calix[4]arenes and Their Investigation in the Separation of Lanthanides

    Doctor of Philosophy, University of Toledo, 2016, Chemistry

    The rare earth elements (REE), composed primarily of the fifteen lanthanides, find broad application in many modern devices, making them indispensable and important to the sustainable, green-world approach prevalent today. Liquid-liquid extraction for the separation of lanthanides using a calix[4]arene scaffold with coordinating ligands on its lower rim has been studied for many years. New solid-phase materials capable of coordinating metals are important to the field of rare earth metal separation and isolation, as they not only minimize solvent wastes but also have the potential to be reused, thereby making the separation and recovery of rare earth metals economical. In the first part, the synthesis and characterization of new chlorine-terminated 2-alkyl-p-tert-butylcalix[4]arenes with different coordinating ligands such as phosphine oxides, ketones, CMPO, and esters on the lower rim were investigated. The extraction efficiencies of these new calix[4]arenes for rare earth metals in liquid-liquid extraction were investigated in order to confirm that functionalization at the 2-position of the calix[4]arenes does not interfere with the extraction properties of the coordinating ligands on the lower rim. In the second part of this project, further modification of the 2-position was carried out by converting the chlorine-terminated end to the highly desirable amino group, which was then utilized in coupling reactions to covalently immobilize these 2-position/tail-modified calix[4]arenes bearing lower-rim coordinating ligands on solid supports via the attached linker group. These newly synthesized solid-phase extractants were characterized using solid-state NMR spectroscopy, FT-IR spectroscopy, elemental analysis, and EDS, and their thermal stability was further determined using TGA. To the best of our knowledge, this represents the first report of attachment of calix[4]arene ligands to solid supports via their 2-positions. These new materials were further in (open full item for complete abstract)

    Committee: Joseph Schmidt Dr. (Advisor); Mark Mason Dr. (Committee Member); Steven Sucheck Dr. (Committee Member); Constance Schall Dr. (Committee Member) Subjects: Chemistry
  • 11. Ghanem, Amer Identifying Patterns of Epistemic Organization through Network-Based Analysis of Text Corpora

    PhD, University of Cincinnati, 2015, Engineering and Applied Science: Computer Science and Engineering

    On-line textual content has grown explosively in recent years, creating truly massive text corpora. As the quantity of text available on-line increases, professionals from different industries such as marketing and politics are realizing the importance of extracting useful information and insights from this treasure trove of data. It is also clear, however, that doing so requires methods that go beyond those developed for classical data processing or even natural language processing. In particular, there is great need for efficient methods that can make sense of the semantic content of this data and allow new knowledge to be inferred from it. The research in this dissertation describes a new method for identifying latent structures (topics) in texts through the application of community extraction techniques on associative networks of words. Since humans represent knowledge in terms of associations, it is asserted that deriving topics from associative networks represents a more cognitively meaningful approach than using purely statistical patterns. The topic identification method proposed in this thesis is called Topic Extraction through Partitioning of Lexical Associative Networks (TExPLAN). It begins by constructing an associative network of words where the strength of their association indicates the frequency of their co-occurrence in documents. Once the word network is constructed, the algorithm proceeds in two stages. In the first stage, a partitioning of the word network takes place using a community extraction method to extract disjoint seed topics. The second stage of TExPLAN uses the connectivity of words across the boundaries of seed topics to assign a relevance measure to each word in each topic, thus generating a set of topics where each one covers all the words in the vocabulary, as is the case with LDA. The topics extracted by TExPLAN are used to define an epistemic metric space in which epistemic entities such as words, texts, do (open full item for complete abstract)

    Committee: Ali Minai Ph.D. (Committee Chair); Raj Bhatnagar Ph.D. (Committee Member); Karen Davis Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member); James Uber Ph.D. (Committee Member) Subjects: Computer Science
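    A rough sketch, under stated assumptions, of the first stage described in the entry above: build a word association network weighted by document-level co-occurrence, then partition it with a community-detection algorithm to obtain disjoint "seed topics". The toy corpus and the modularity-based partitioner below are stand-ins for illustration, not necessarily the method TExPLAN itself uses.

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

docs = [  # toy corpus for illustration only
    ["election", "vote", "senate", "policy"],
    ["vote", "senate", "campaign"],
    ["protein", "cell", "enzyme"],
    ["cell", "enzyme", "membrane", "protein"],
]

G = nx.Graph()
for doc in docs:
    for w1, w2 in combinations(set(doc), 2):
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1      # co-occurrence count as edge weight
        else:
            G.add_edge(w1, w2, weight=1)

seed_topics = greedy_modularity_communities(G, weight="weight")
for i, topic in enumerate(seed_topics):
    print(f"seed topic {i}: {sorted(topic)}")
```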
  • 12. Mendes, Pablo Adaptive Semantic Annotation of Entity and Concept Mentions in Text

    Doctor of Philosophy (PhD), Wright State University, 2014, Computer Science and Engineering PhD

    Recent years have seen an increase in interest in knowledge repositories that are useful across applications, in contrast to the creation of ad hoc or application-specific databases. These knowledge repositories figure as a central provider of unambiguous identifiers and semantic relationships between entities. As such, these shared entity descriptions serve as a common vocabulary to exchange and organize information in different formats and for different purposes. Therefore, there has been remarkable interest in systems that are able to automatically tag textual documents with identifiers from shared knowledge repositories so that the content in those documents is described in a vocabulary that is unambiguously understood across applications. Tagging textual documents according to these knowledge bases is a challenging task. It involves recognizing the entities and concepts that have been mentioned in a particular passage and attempting to resolve the eventual ambiguity of language in order to choose one of many possible meanings for a phrase. There has been substantial work on recognizing and disambiguating entities for specialized applications, or constrained to limited entity types and particular types of text. In the context of shared knowledge bases, since each application has potentially very different needs, systems must have unprecedented breadth and flexibility to ensure their usefulness across applications. Documents may exhibit different language and discourse characteristics, discuss very diverse topics, or require focus on parts of the knowledge repository that are inherently harder to disambiguate. In practice, for developers looking for a system to support their use case, it is often unclear whether an existing solution is applicable, leading those developers to trial-and-error and ad hoc usage of multiple systems in an attempt to achieve their objective. In this dissertation, I propose a conceptual model that unifies related techniques in this (open full item for complete abstract)

    Committee: Amit P. Sheth Ph.D. (Advisor); Krishnaprasad Thirunarayan Ph.D. (Committee Member); Shajoun Wang Ph.D. (Committee Member); Sören Auer Ph.D. (Committee Member) Subjects: Computer Science
  • 13. Thomas, Christopher Knowledge Acquisition in a System

    Doctor of Philosophy (PhD), Wright State University, 2012, Computer Science and Engineering PhD

    I present a method for growing the amount of knowledge available on the Web using a hermeneutic method that involves background knowledge, Information Extraction techniques, and validation through discourse and use of the extracted information. I present the metaphor of the "Circle of Knowledge on the Web". In this context, knowledge acquisition on the web is seen as analogous to the way scientific disciplines gradually increase the knowledge available in their field. Here, formal models of interest domains are created automatically or manually and then validated by implicit and explicit validation methods before the statements in the created models can be added to larger knowledge repositories, such as the Linked Open Data cloud. This knowledge is then available for the next iteration of the knowledge acquisition cycle. I give both a theoretical underpinning and practical methods for the acquisition of knowledge in collaborative systems, covering both the Knowledge Engineering angle and the Information Extraction angle of this problem. Unlike traditional approaches, however, this dissertation shows how Information Extraction can be incorporated into a mostly Knowledge Engineering based approach, as well as how an Information Extraction-based approach can make use of engineered concept repositories. Validation is seen as an integral part of this systemic approach to knowledge acquisition. The centerpiece of the dissertation is a domain model extraction framework that implements the idea of the "Circle of Knowledge" to automatically create semantic models for domains of interest. It splits the involved Information Extraction tasks into Domain Definition, in which pertinent concepts are identified and categorized, and Domain Description, in which facts that describe the extracted concepts are extracted from free text. I then outline a social computing strategy for information validation in order to create knowledge from the (open full item for complete abstract)

    Committee: Amit Sheth PhD (Advisor); Pankaj Mehra PhD (Committee Member); Shaojun Wang PhD (Committee Member); Pascal Hitzler PhD (Committee Member); Gerhard Weikum PhD (Committee Member) Subjects: Artificial Intelligence; Computer Science; Information Science
  • 14. Brisbin, Judith EXTRACTION TECHNIQUES FOR TRACE ELEMENT DETERMINATIONS OF BIOLOGICAL AND ENVIRONMENTAL SAMPLES INCLUDING ELEMENTAL SPECIATION OF LOBSTER USING INDUCTIVELY COUPLED PLASMA - MASS SPECTROMETRY

    PhD, University of Cincinnati, 2001, Arts and Sciences : Chemistry

    Adequate identification and quantification of elemental species in the environment are necessary to completely assess their potential toxicity. Inductively coupled plasma mass spectrometry (ICP-MS) coupled to chromatographic separation is a widely used, highly sensitive detector. For samples to be analyzed by ICP-MS, they must first be extracted into a solution capable of being nebulized into the ICP. A variety of extraction procedures were evaluated for the extraction of arsenic and other analytes from lobster tissue samples using ICP-MS detection. Room temperature mixing, sonication, Soxhlet, microwave-assisted, supercritical carbon dioxide, and subcritical water extractions were evaluated for a variety of solvent systems, and optimum conditions were determined using a partially defatted Lobster Hepatopancreas marine reference material (TORT-2, National Research Council of Canada). Microwave-assisted extraction (MAE) yielded comparable or improved recoveries for all of the analytes monitored and proved to be the mildest, fastest, least complicated, and most reproducible extraction technique. MAE at 75 °C for 2 minutes of exposure time yielded quantitative recovery of arsenic from TORT-2 and from lobster tissue samples purchased from a local restaurant. A novel gradient anion exchange chromatographic technique was developed that allows the speciation of arsenobetaine (AB), arsenocholine (AC), arsenite (As III), arsenate (As V), monomethylarsonic acid (MMAA), and dimethylarsinic acid (DMAA) in ~27 minutes using ammonium carbonate buffer. Low detection limits, excellent long-term stability, and baseline resolution of all of the arsenic species evaluated were achieved when the ratio of AC:AB was less than ~12.5:50. This technique was successfully applied to TORT-2 and lobster tissue samples. AB was the major arsenic species identified. AC, DMAA, As V, and unknown peaks, possibly arsenosugars, were also found. Methanol and isopropanol were evaluated as extraction solvents for a (open full item for complete abstract)

    Committee: Joseph Caruso (Advisor) Subjects: Chemistry, Analytical
  • 15. BADAOUI, RAOUL APPROACHES FOR PARASITIC-INCLUSIVE SYMBOLIC CIRCUIT REPRESENTATION AND EXTRACTION FOR SYNTHESIS

    PhD, University of Cincinnati, 2005, Engineering : Computer Science and Engineering

    Layout-induced parasitics have significant effects on the behavior of circuits in general and on the performance of high-frequency analog circuits in particular. To achieve parasitic-inclusive performance closure, layout-aware circuit synthesis methodologies are beginning to emerge. In layout-in-the-loop synthesis methodologies, performance analysis is based on the generation of a concrete layout for the explored circuit sizes. A parasitic-inclusive circuit is extracted from the layout using a standard extractor and is analyzed using a simulator to determine whether the required constraints are met. The purpose of layout generation during the synthesis process is solely to determine the layout-induced effects, in terms of device and interconnect parasitics in the extracted circuit, in order to perform accurate, layout-aware performance analysis. If the parasitics could be estimated or determined otherwise, there would be no need for layout generation. Various approaches to estimating parasitics lack the correctness that would only come from examining the layout itself. The proposed approach tries to include the exactness of the layout to be generated without actually generating it. It relies on using pre-generated structures for the specified un-sized circuit; these structures are generated before synthesis, and they contain the information that a layout would have provided to a synthesis process if it were to be generated. This information contains extraction specifics for modules, locations of modules, and routing characteristics. Pre-Layout Extraction: the concept of Pre-Layout Extraction is used to cover the extraction-specific information of modules present in the circuit. It is achieved using a high-level language, MSL (Module Specification Language), for the specification of parameterized, topology-specific circuit extractors. Upon compilation, the MSL program yields an executable module which generates the extracted circuit containing parasitics, passive and active devices (open full item for complete abstract)

    Committee: Dr. Ranga Vemuri (Advisor) Subjects:
  • 16. Hazlebeck, David Axial mixing in a liquid-liquid spray type extraction column /

    Master of Science, The Ohio State University, 1961, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 17. Porell, Alcide Solvent extraction studies of scandium (III), yttrium (III) and lanthanum (III) with 2,2,6,6-tetramethyl-3,5-heptanedione /

    Master of Science, The Ohio State University, 1967, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 18. Bowsher, Harlan The effect of column height on the extraction coefficient of a spray tower /

    Master of Science, The Ohio State University, 1950, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 19. Donham, Walter Design and construction of a perforated plate solvent extraction on column /

    Master of Science, The Ohio State University, 1950, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 20. Flaute, Dylan Template-Based Document Information Extraction Using Neural Network Keypoint Filtering

    Master of Science in Electrical Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Documents like invoices, receipts, and forms are essential to many modern business operations. We develop a system for autonomously processing common United States Air Force contract front forms. The system takes in a form and extracts a key-value pair for each box in the form. This task is called key information extraction. In a structured document, the layout is the same from instance to instance (perhaps allowing for rigid transforms). Our documents are semi-structured because, although their layouts are similar, some of the content may be in slightly different places between instances of the form. This makes information extraction harder because the response regions may be in different places from form to form. We demonstrate that, despite the added difficulty, template matching and registration make for a strong baseline on our semi-structured forms. Additionally, we propose a filtering approach for keypoints based on their position in the layout. Specifically, we use a trained U-Net model to identify intersections and end-points in the form's "wire-frame." Then, the pipeline only uses keypoints that are close to those landmarks. We demonstrate that this method improves the registration quality over our baseline, results in a more intuitive distribution of keypoints across the image, and potentially speeds up processing since fewer keypoints need matching.

    Committee: Russell Hardie (Advisor); Barath Narayanan (Committee Member); Vijayan Asari (Committee Member) Subjects: Electrical Engineering
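    The keypoint-filtering idea described in the entry above can be illustrated with a short, hedged sketch: keep only feature keypoints that fall near "wire-frame" landmarks. The landmark mask below is a dummy array standing in for the U-Net prediction the thesis uses, and ORB plus the distance threshold are assumptions made purely for the sketch.

```python
import cv2
import numpy as np

def filter_keypoints_near_landmarks(image: np.ndarray,
                                    landmark_mask: np.ndarray,
                                    max_dist: float = 10.0):
    """Detect ORB keypoints and keep only those within max_dist of a landmark pixel."""
    orb = cv2.ORB_create()
    keypoints = orb.detect(image, None)
    # Distance from every pixel to the nearest landmark pixel (mask > 0).
    dist = cv2.distanceTransform((landmark_mask == 0).astype(np.uint8), cv2.DIST_L2, 3)
    return [kp for kp in keypoints
            if dist[int(kp.pt[1]), int(kp.pt[0])] <= max_dist]

# Synthetic 512x512 "form" image and a fake landmark mask with one marked corner.
img = np.full((512, 512), 255, dtype=np.uint8)
cv2.rectangle(img, (50, 50), (450, 450), 0, 2)   # draw a box so ORB finds corners
mask = np.zeros((512, 512), dtype=np.uint8)
mask[48:53, 48:53] = 255                          # pretend the U-Net marked this corner
print(len(filter_keypoints_near_landmarks(img, mask)), "keypoints kept near landmarks")
```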