Search Results

(Total results 188)

  • 1. Marcano, Nashieli La materialidad del texto en los cuentos de Jorge Luis Borges

    Master of Arts (MA), Bowling Green State University, 2006, Spanish

    This thesis is concerned with the material nature of the text in Borges' short stories. The context of this study stems from Borges' gravitation towards the use of textual references, his quest for the “original,” and his view of literature and reading as exercises of re-creation. In each of Borges' works lies an object that encapsulates the essence of the story and fills it with significance, one that establishes direct contact between reality and fiction. Part I explores the disciplines of bibliography, semiotics, hermeneutics, and semantics and the role they play in the creation, transmission, and interpretation of the text-object. Part II describes the different transformations of the Borgesian text-object, from how it is produced (or can produce) to how it establishes an intertextual, psychological, and physiological relationship with the subject. Part III analyzes the inscription of text in various physical spaces and surfaces such as paper, dreams and memory, fluid substances, architectural spaces and structures, as well as the body. Finally, Part IV studies the translation of the Borgesian text-object in the virtual and electronic space.

    Committee: Francisco Cabanillas (Advisor)
  • 2. Tan, Yong-Seng An Explanatory Sequential Mixed-Methods Study to Understand Undergraduate Students' Everyday Texting Experiences as they Relate to Social Connectedness

    PHD, Kent State University, 2024, College of Education, Health and Human Services / School of Foundations, Leadership and Administration

    This explanatory sequential mixed-methods study aimed to investigate how undergraduate students' everyday texting experiences related to their social connectedness. This research was conducted through a quantitative survey and qualitative interviews. The quantitative phase involved a survey distributed to a sample of undergraduate students, assessing their texting frequency, demographic variables (age, gender, race/ethnicity, and living arrangements), and levels of social connectedness using the Social Connectedness Scale-Revised (SCS-R; Lee et al., 2001). The subsequent qualitative phase involved semi-structured interviews with a subset of participants selected based on their survey responses to gain deeper insights into their texting behaviors and perceptions of social connectedness. The study's findings indicated that while texting frequency significantly predicted social connectedness among undergraduate students, demographic variables did not show a substantial impact. Additionally, students' perceptions of texting as a tool for maintaining social connections varied, with some viewing it as a convenient means of communication and others finding it insufficient for meaningful interaction. Key motivations for using texting included convenience, immediacy, and the non-intrusive nature of asynchronous communication (Liu et al., 2014; Hall et al., 2018). Texting was generally viewed as less effective than face-to-face interactions for developing deeper social ties (Derks et al., 2008; Uhls et al., 2014). This study contributed to the limited research on digital communication and social connectedness among undergraduate students by highlighting the significant role of texting frequency in predicting social connectedness (Ehrenreich et al., 2019; Harley et al., 2007). It also underscored the need for educational institutions to develop programs that leveraged texting to enhance peer connections while promoting the balanced use of digital and face-to-face interaction (open full item for complete abstract)

    Committee: Enrico Gandolfi (Committee Chair); Richard Ferdig (Committee Co-Chair); Frank Ryan (Committee Member) Subjects: Adult Education; Communication; Curricula; Curriculum Development; Education; Educational Technology; Instructional Design; Mass Media; Mental Health
  • 3. Gandee, Tyler Natural Language Generation: Improving the Accessibility of Causal Modeling Through Applied Deep Learning

    Master of Science, Miami University, 2024, Computer Science

    Causal maps are graphical models that are well understood at small scales. When created through a participatory modeling process, they become a strong asset in decision making. Furthermore, those who participate in the modeling process may seek to understand the problem from various perspectives. However, as causal maps increase in size, the information they contain becomes clouded, which results in the map being unusable. In this thesis, we transform causal maps into various mediums to improve the usability and accessibility of large causal models; our proposed algorithms can also be applied to small-scale causal maps. In particular, we transform causal maps into meaningful paragraphs using GPT and network traversal algorithms to attain full coverage of the map. Then, we compare automatic text summarization models with graph reduction algorithms to reduce the amount of text to a more approachable size. Finally, we combine our algorithms into a visual analytics environment that provides details-on-demand for the user by displaying the summarized text and allowing interaction with summaries to display the detailed text and causal map, and even to generate images in an appropriate manner. We hope this research provides more tools for decision-makers and allows modelers to give back to participants the final result of their work.

    Committee: Philippe Giabbanelli (Advisor); Daniela Inclezan (Committee Member); Garrett Goodman (Committee Member) Subjects: Computer Science
  • 4. Har-zvi, Adam A Goat, a Dog, and a Turkey Walk Into a Cantata: Text-Setting in Charles Wuorinen and James Tate's It Happens Like This

    Doctor of Musical Arts (DMA), Bowling Green State University, 2024, Contemporary Music

    Charles Wuorinen's 2010 cantata, It Happens Like This, with texts by James Tate, is a unique work that demonstrates the skill and individual voices of the two artists. Wuorinen synthesizes the jarring, humorous, and eerie prose poems of Tate with his own complex musical style to create a composition that transcends clear classification as being part of the cantata genre. This paper is an analysis of It Happens Like This with a particular focus on the relationship of text and music. Through this analysis I will show the creative methods that Wuorinen uses to reflect and comment on Tate's pieces. Charles Wuorinen's varied text-setting techniques serve to convert Tate's prose poems from individual, short pieces into a full-fledged dramatic work. To outline the significance of these techniques, I will also include a comparative analysis with a more traditional work: J.S. Bach's St. John Passion. Additionally, I will highlight Tate's contribution to the cantata, and how his personal style shapes It Happens Like This. In this document, I will discuss the text and music independently of one another before bringing them together to address their intersection in the musical text-setting.

    Committee: Marilyn Shrude DMA (Advisor); Gregory Decker Ph.D. (Committee Member); Eftychia Papanikolaou Ph.D. (Committee Member); Daniel Maitland Ph.D. (Other) Subjects: Music
  • 5. Bruno, Amy EFFECTS OF A TEXT-MAPPING INTERVENTION ON EXPOSITORY TEXT READING COMPREHENSION OF STUDENTS WITH EMOTIONAL DISTURBANCES

    Doctor of Philosophy, The Ohio State University, 2020, Educational Studies

    The purpose of this study was to examine the effects of a text mapping strategy and student choice of reading passage on the reading comprehension of fourth- and fifth-grade students with Emotional Disturbance. The researcher used an alternating treatments design within a multiple baseline design across participants. Participants were fourth- and fifth-grade students from a large Midwestern urban school district, who were identified with an Emotional Disturbance and who received at least 50% of their education in a special education classroom setting. Three participants' reading comprehension of expository texts was measured through oral retell of passages and researcher-generated quizzes. Visual analysis was used to examine the effects of text mapping with prompt fading. This study supports previous research indicating that text mapping is an effective strategy for improving the reading comprehension of students with disabilities when presented in combination with a choice of reading passage.

    Committee: Laurice Joseph Ph.D (Advisor); Sheila Alber-Morgan Ph.D. (Committee Member); Moira Konrad Ph.D. (Committee Member); Adrian Rodgers Ph.D (Other) Subjects: Behavioral Psychology; Behavioral Sciences; Education; Educational Psychology; Psychology; Reading Instruction
  • 6. Aryasomayajula, Naga Srinivasa Baradwaj Machine Learning Models for Categorizing Privacy Policy Text

    MS, University of Cincinnati, 2018, Engineering and Applied Science: Computer Science

    A privacy policy is a legal document that discloses the privacy practices of a company to its customers and contains information on how the company collects, uses and manages their data. The privacy policies of many companies on the web are written in natural language. The vocabulary employed in these documents is often sophisticated, and the policy documents themselves are lengthy. This complex nature of privacy policy documents leads end users to skip reading them or to miss vital information, thus resulting in users not making informed decisions about whether to allow the company to collect their personal information. There is a need to address this issue by making privacy policies more user-friendly. In order to address these issues, this thesis makes use of a privacy policy corpus called OPP-115, which contains 115 privacy policies annotated with different data practices. In this thesis, privacy policy text from the First Party Collection/Use category of the OPP-115 corpus is used for the analysis. The methods used here are a combination of linguistic and machine learning techniques applied to the corpus. A set of features, including noun phrases, verb phrases, and the relative positions of text, is derived in this thesis after observing the behavior of the text fragments in the corpus. These features are used in various supervised learning algorithms. Using a bag-of-words representation of the text as a baseline model, the performance of these algorithms with the extracted features is compared using various statistical measures. It is observed that the supervised learning methods with the features extracted in this thesis outperform the baseline methods.

    Committee: Shomir Wilson Ph.D. (Committee Chair); Gowtham Atluri Ph.D. (Committee Member); Raj Bhatnagar Ph.D. (Committee Member) Subjects: Computer Science
  • 7. Ali, Ismael Using and Improving Computational Cognitive Models for Graph-Based Semantic Learning and Representation from Unstructured Text with Applications

    PHD, Kent State University, 2018, College of Arts and Sciences / Department of Computer Science

    In the era of data-driven industry, unstructured text, which is generated by human cognition skills, remains the main data format, with a massive amount being generated from different sources of technology. The problem we are handling in our work is: how can a machine more efficiently learn, represent, and grow semantics from unstructured text, as the written form of natural language? We propose a cognitive model, ICAN-2, inspired by human cognition skills, to learn/extract and represent semantics from text. ICAN-2 is an improved version of the ICAN cognitive model of semantic memory. ICAN stands for Incremental Construction of an Associative Network, and the model aims at computationally modeling the development of semantic associations in the human semantic memory. Both the ICAN and ICAN-2 models use semantic graphs to represent semantics. The traditional and still widely used text representation model is the Vector Space Model (VSM), also known as the Bag-Of-Words (BOW) model, in which documents are represented simply by n-dimensional feature vectors. The most widely used term weighting scheme in the VSM model is called Term-Frequency/Inverse-Document-Frequency (TF/IDF), which is also used in the latent semantic analysis (LSA) model of semantic learning and representation. Both VSM-based approaches, TF/IDF and LSA, have some notable limitations, such as neglecting the word order and other dependency relations among the terms appearing in the original text documents. The ICAN-2 model is an alternative, cognitive-graph-based model to the traditional VSM model of text representation. After a detailed survey of related works, the performance of the ICAN-2 model is compared against the two most closely related models of semantic modeling in the literature: (1) the LSA model, a cognitive model that has been applied in different text-mining tasks and is an alternative to the TF/IDF technique, and (2) the ICAN model, which is the seed model for our work and tec (open full item for complete abstract)

    Committee: Austin Melton Jr (Advisor) Subjects: Computer Science
  • 8. Ren, Zhaowei Analysis and Modeling of the Structure of Semantic Dynamics in Texts

    MS, University of Cincinnati, 2017, Engineering and Applied Science: Electrical Engineering

    The analysis of texts has recently become a very active and important area of research because of the exponential growth in electronic documents and the need for their automatic analysis. In particular, it is important to extract meaning from documents for purposes such as classification, interpretation, summarization, etc. The standard approach has been to identify keywords and extract topics. The most widely used topic extraction methods such as latent Dirichlet allocation (LDA) use a bag-of-words approach, where each word in the document is assumed to be chosen independently. Alternatively, there are Markov models – often based on n-grams – that look at the transition probabilities between words or groups of words in an attempt to extract temporal semantic patterns. However, the structure of thought is more complicated than either of these two models assumes. The research presented in this thesis is based on a framework that models thinking as a hierarchical itinerant dynamical process. In this view, a document (or speech) is a sequence of variable length semantic blocks, each representing a single coherent thought, with transitions between blocks and intervening gaps of low semantic content. Importantly, the model looks at semantic coherence at the word level, sentence level and block level. Analyzing this semantic structure for individual documents and large corpora of documents is useful for several reasons: 1) It helps identify general patterns and parameters of human thinking; 2) It allows the writing style of documents to be characterized; and 3) It potentially provides an automated way to uncover the deeper ideas underlying the document, including the structure of its argument. This thesis focuses on the first aspect. Using several corpora of research papers from the International Joint Conference on Neural Networks over multiple years, it looks at the characteristics of semantic blocks and their transition statistics, building towards a hierarchical Mar (open full item for complete abstract)

    Committee: Ali Minai Ph.D. (Committee Chair); Raj Bhatnagar Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 9. SUI, ZHENHUAN Hierarchical Text Topic Modeling with Applications in Social Media-Enabled Cyber Maintenance Decision Analysis and Quality Hypothesis Generation

    Doctor of Philosophy, The Ohio State University, 2017, Industrial and Systems Engineering

    Many decision problems are set in changing environments. For example, determining the optimal investment in cyber maintenance depends on whether there is evidence of an unusual vulnerability such as “Heartbleed” that is causing an especially high rate of incidents. This gives rise to the need for timely information to update decision models so that the optimal policies can be generated for each decision period. Social media provides a streaming source of relevant information, but that information needs to be efficiently transformed into numbers to enable the needed updates. This dissertation first explores the use of social media as an observation source for timely decision-making. To efficiently generate the observations for Bayesian updates, the dissertation proposes a novel computational method to fit an existing clustering model, called K-means Latent Dirichlet Allocation (KLDA). The method is illustrated using a cyber security problem related to changing maintenance policies during periods of elevated risk. The dissertation also studies four text corpora with 100 replications and shows that KLDA is associated with significantly reduced computational times and more consistent model accuracy compared with collapsed Gibbs sampling. Because social media is becoming more popular, researchers have begun applying text analytics models and tools to extract information from these social media platforms. Many of the text analytics models are based on Latent Dirichlet Allocation (LDA). But these models are often poor estimators of topic proportions for emerging topics. Therefore, the second part of the dissertation proposes a visual summarizing technique based on topic models, a point system, and Twitter feeds to support passive summarizing and sensemaking. The associated “importance score” point system is intended to mitigate the weakness of topic models. The proposed method is called the TWitter Importance Score Topic (TWIST) summarizing method. TWIST employs the topic propor (open full item for complete abstract)

    Committee: Theodore Allen (Advisor); Steven MacEachern (Committee Member); Cathy Xia (Committee Member); Nena Couch (Other) Subjects: Finance; Industrial Engineering; Operations Research; Statistics; Systems Science
  • 10. Cochran, Pamela Q Code, Text, and Signs: A Study of the Social Semiotic Significance of QSL Cards

    Master of Arts in Rhetoric and Writing, University of Findlay, 2016, English

    Applying rhetorical theories to non-traditional forms of text is important and necessary in the world of composition to further understand and research the many different types of rhetoric. QSL postcards from the 1920s and '30s are a perfect example of a sub-genre of composition worthy of study because of the semiotics, discourse, and community that are represented on them in text and in image. This research, consisting of written text and a web project, will focus on a set of QSL cards collected by one ham radio operator during the 1920s and '30s and will demonstrate the importance of rhetorical and semiotic study of multi-modal text.

    Committee: Courtney Bates Dr. (Committee Chair); Christine Denecker Dr. (Committee Member); Erin Laverick Dr. (Committee Member); Christine Tulley Dr. (Advisor) Subjects: Composition; Cultural Anthropology; Mass Communications; Multimedia Communications; Rhetoric; Technical Communication
  • 11. Johnson, Eamon Methods in Text Mining for Diagnostic Radiology

    Doctor of Philosophy, Case Western Reserve University, 2016, EECS - Computer and Information Sciences

    Information extraction from clinical medical text is a challenge in computing: bringing structure to the prose produced for communication in medical practice. In diagnostic radiology, prose reports are the primary means for communication of image interpretation to patients and other physicians, yet secondary use of the report requires either costly review by another radiologist or machine interpretation. In this work, we present mechanisms for improving machine interpretation of domain-specific text with large-scale semantic analysis, using a corpus of 726,000 real-world radiology reports as a basis for experimentation. We examine the abstract conceptual problem of detection of incidental findings (uncertain or unexpected results) in imaging study reports. We demonstrate that classifiers incorporating semantic metrics can outperform prior methods in F-measure for follow-up classification and can also outperform in-clinic physician classification of incidental findings in F-measure (0.689 versus 0.648). Further, we propose two semantic metrics, focus and divergence, as calculated over the SNOMED-CT ontology graph, for summarization and projection of discrete report concepts into 2-dimensional space, which enables both machine classification and physician interpretation of classifications. With understanding of the utility of semantic metrics for classification, we present methods for enhancing extraction of semantic information from clinical corpora. First, we construct a zero-knowledge method for imputation of semantic class for unlabeled terms through maximization of a confidence factor computed using pairwise co-occurrence statistics and rules limiting recall. Experiments with our method on corpora of reduced Mandelbrot information temperature produce accurate labeling of up to 25% of terms not labeled by prior methods. Second, we propose a method for context-sensitive quantification of relative concept salience and an algorithm capable of increasing both salienc (open full item for complete abstract)

    Committee: Gultekin Ozsoyoglu (Committee Chair); Marc Buchner (Committee Member); Adam Perzynski (Committee Member); Andy Podgurski (Committee Member) Subjects: Computer Science
  • 12. Wu, Yingyu Using Text based Visualization in Data Analysis

    MS, Kent State University, 2014, College of Arts and Sciences / Department of Computer Science

    In this thesis, we use text-based visualizations for the analysis of a variety of big datasets. First, a large set of taxi trajectories is converted into a set of documents of street names. The thematic topics are discovered to find hidden city traffic patterns. Parallel coordinates are used to visualize the taxi trajectories over these topics to find clusters and outliers. Second, high-dimensional categorical datasets are processed as text documents and subspace clusters are found. Optimal rule mining is used to discover rules describing multivariate relationships. Then, we use word trees and word clouds to visualize such clusters and rules. Third, we visualize a real-time text stream with an effective visual analysis system. Multilevel and focus+context techniques are used to help users visualize and understand evolving text datasets.

    Committee: Zhao Ye (Advisor); Austin Melton (Committee Member); Cheng Chang Lu (Committee Member) Subjects: Computer Science
  • 13. Ramakrishnan, Cartic Extracting, Representing and Mining Semantic Metadata from Text: Facilitating Knowledge Discovery in Biomedicine

    Doctor of Philosophy (PhD), Wright State University, 2008, Computer Science and Engineering PhD

    The information access paradigm offered by most contemporary text information systems is a search-and-sift paradigm, where users have to manually glean and aggregate relevant information from the large number of documents that are typically returned in response to keyword queries. Expecting the users to glean and aggregate information has led to several inadequacies in these information systems. Owing to the size of many text databases, search-and-sift is very tedious, often requiring repeated keyword searches that refine or generalize query terms. A more serious limitation arises from the lack of automated mechanisms to aggregate content across different documents to discover new knowledge. This dissertation focuses on processing text to assign semantic interpretations to its content (extracting semantic metadata) and on the design of algorithms and heuristics to utilize the extracted semantic metadata to support knowledge discovery operations over text content. Contributions in extracting semantic metadata in this dissertation cover the extraction of compound entities and complex relationships connecting entities. Extraction results are represented using a standard Semantic Web representation language (RDF) and are manually evaluated for accuracy. The knowledge discovery algorithms presented herein operate on RDF data. To further improve access mechanisms to text content, applications supporting semantic browsing and semantic search of text are presented.

    Committee: Amit Sheth PhD (Advisor); Michael Raymer PhD (Committee Member); Shaojun Wang PhD (Committee Member); Guozhu Dong PhD (Committee Member); Thaddeaus Tarpey PhD (Committee Member); Vasant Honavar PhD (Committee Member) Subjects: Computer Science
  • 14. Wallot, Sebastian The role of reading fluency, text difficulty and prior knowledge in complex reading tasks

    PhD, University of Cincinnati, 2011, Arts and Sciences: Psychology

    Despite the fact that reading is one of the most frequently studied topics in psychology, cognitive science, and educational science, research on text reading has made little progress in the past 100 years. This is true for the more basic research on reading as a cognitive activity, where materials that resemble natural texts are usually neglected in favor of tasks that employ only individual words or sentences. It is just as true for applied research, where studies either follow the lead of the ‘few-words approach’ or focus exclusively on the outcome of reading performance – such as memory or comprehension of a read passage. In this latter case, the reading activity itself that brings about a certain memory or comprehension result remains in a black box. The work presented in this dissertation tries to bridge this gap. In particular, the aim is to expand the work of Wallot and Van Orden (2011a, b) on the application of complexity metrics to evaluate the process of reading in complex text reading tasks. Complexity metrics, which quantify the degree of stability, complexity, and interconnectedness of performance, are employed to expand the findings on reading fluency in self-paced reading by Wallot and Van Orden (2011a, b) to the level of eye movements during reading. While reading performance during self-paced reading is estimated by the intervals between the key-presses that the reader uses to reveal each new word or sentence of a text, eye movements are considered to be a more fine-grained measure of the reading process (Rayner, 1978). Hence, Study 1 of this dissertation is an attempt to replicate the findings of Wallot and Van Orden (2011a, b), where the time evolutions of the performance of more fluent readers showed higher stability and greater commonality compared to less fluent readers, and the stability of and commonality between less fluent readers' performances increased with repeated reading of the same text. Study 2 gathers first evidence for what (open full item for complete abstract)

    Committee: Guy Van Orden PhD (Committee Chair); John Holden PhD (Committee Member); Beth O'Brien PhD (Committee Member); Michael Richardson PhD (Committee Member) Subjects: Psychology
  • 15. Wilson, Christin Variation and Text Type in Old Occitan Texts

    Doctor of Philosophy, The Ohio State University, 2012, Linguistics

    Although there is a fairly large corpus of Old Occitan texts, the majority of the linguistic analysis on the language has been done using only one type of text: the lyric poetry of the troubadours, though the lyric poetry accounts for only about ten percent of the total Old Occitan corpus. Our understanding of the language and its development is thus less complete and accurate than it could be if all of the types of texts that constitute the full corpus were considered more fully. This study seeks to bridge that gap by considering whether analyzing the understudied prose and non-lyric poetry texts uncovers the same variants and patterns as the lyric poetry, or to what extent these vary between text types. The publication of the Concordance de l'Occitan Medieval (COM), which includes the entirety of the Old Occitan corpus for the first time, allows the prose and non-lyric poetry texts to be searched and analyzed digitally. Very little quantitative work has been done concerning the patterns of variation within the Old Occitan texts, but the creation of the COM makes such studies possible. Using this corpus and taking previous research on the language as a starting point (e.g. Jensen 1976, Anglade 1977), this study compares the attestations and patterns of the use of phonological and morphological features between the three major types of text in Old Occitan: the lyric poetry texts, the non-lyric poetry texts, and the prose texts. By considering these features both quantitatively and qualitatively, I seek to further understand the relationship between variation and text type, particularly in reference to the representation of sound change in progress. Three aspects of the Old Occitan language are investigated: the use of analytic and synthetic comparative adjective forms, the formation of adjectives using various derivational suffixes, and the development of the glide-initial diphthongs. My findings show that the text type plays an important role in the patterns of va (open full item for complete abstract)

    Committee: Brian Joseph (Advisor); Daniel Collins (Committee Member); Hope Dawson (Committee Member); Dieter Wanner (Committee Member) Subjects: Linguistics
  • 16. Xiong, Hui Combining Subject Expert Experimental Data with Standard Data in Bayesian Mixture Modeling

    Doctor of Philosophy, The Ohio State University, 2011, Industrial and Systems Engineering

    Engineers face many quality-related datasets containing free-style text or images. For example, a database could include summaries of complaints filed by customers, or descriptions of the causes of rework or maintenance or of the associated actions taken, or a collection of quality inspection images of welded tubes. The goal of this dissertation is to enable engineers to input a database of free-style text or image data and then obtain a set of clusters or “topics” with intuitive definitions and information about the degree of commonality that together helps prioritize system improvement. The proposed methods generate Pareto charts of ranked clusters or topics with their interpretability improved by input from the analyst or method user. The combination of subject matter expert data with standard data is the novel feature of the methods considered. Prior to the methods proposed here, analysts applied Bayesian mixture models and had limited recourse if the cluster or topic definitions failed to be interpretable or were at odds with the knowledge of subject matter experts. The associated “Subject Matter Expert Refined Topic” (SMERT) model permits on-going knowledge elicitation and high-level human expert data integration to address two issues: (1) unsupervised topic models often produce results that are difficult for the user to interpret, and (2) there is a need for a “Hierarchical Analysis Designed Latency Experiment” (HANDLE) that allows a human expert to interact with the model results. If groupings are missing key elements, so-called “boosting” of these elements is possible. If certain members of a cluster are nonsensical or nonphysical, so-called “zapping” of these nonsensical elements is possible. We also describe a fast Collapsed Gibbs Sampling (CGS) algorithm for the SMERT method, which offers the capacity to fit the SMERT model to large datasets efficiently but which is associated with approximations in certain cases. We use three case studies to illustrate the proposed methods. The first relates to scrap text reports for a Ch (open full item for complete abstract)

    Committee: Theodore Allen PhD (Advisor); Suvrajeet Sen PhD (Committee Member); David Woods PhD (Committee Member) Subjects: Computer Science; Engineering; Industrial Engineering; Information Technology
  • 17. Sriram, Bharath Short Text Classification in Twitter to Improve Information Filtering

    Master of Science, The Ohio State University, 2010, Computer Science and Engineering

    In micro-blogging services such as Twitter, the users may get overwhelmed by the raw data. One solution to this problem is the classification of Twitter messages (tweets). As short texts like tweets do not provide sufficient word occurrences, classification methods that use traditional approaches such as “Bag-Of-Words” have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text into a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages. Existing works on classification of short text messages integrate every message with meta-information from external information sources such as Wikipedia and WordNet. Automatic text classification and hidden topic extraction approaches perform well when there is meta-information or when the context of the short text is extended with knowledge extracted using large collections. But these approaches require online querying, which is time-consuming and unfit for real-time applications. When external features from world knowledge are used to enhance the feature set, complex algorithms are required to carefully prune overzealous features. These approaches eliminate the problem of data sparseness but create a new problem of the curse of dimensionality [38]. Hence, efficient ways are required to improve the accuracy of classification by using a minimal set of features to represent the short text. We propose an intuitive approach to determine the class labels and the set of features, with a focus on user intentions on Twitter such as daily chatter, conversations, and sharing information/URLs. We classify incoming tweets into five generic categories – news, opinions, deals, events and private messages. We believe that these categories are diverse and cover most of the topics that people usually tweet about. Experimental results using our proposed technique outperform the ba (open full item for complete abstract)

    Committee: Hakan Ferhatosmanoglu PhD (Advisor); Gagan Agrawal PhD (Committee Member) Subjects: Computer Science
  • 18. Beach, Shannon “PUTTING OURSELVES IN THEIR SHOES”: CASE STUDIES OF FOUR TEENAGERS' READING EXPERIENCES WITH NONFICTION LITERATURE IN A SOCIAL STUDIES CLASSROOM

    PHD, Kent State University, 2012, College of Education, Health and Human Services / School of Teaching, Learning and Curriculum Studies

    The purpose of this qualitative study was to examine student experiences reading nonfiction literature in a social studies classroom. It examined the transactions that took place between the readers and the nonfiction literature in a social studies setting at Fullerton High School, a suburban school in Northeast Ohio. Reader responses, including journal entries, online postings, and poems, along with classroom observations and interviews, informed the findings. The study found the students' backgrounds and experiences influenced their reading experiences with the nonfiction literature in their social studies class, and as a result, the students created personal transactions or poems with the text. While each student transacted with the text differently, they all experienced self poems, which were transactions directly related to the reader; text poems, which were transactions directly related to other forms of media; and world poems, which were transactions related to the world. As a result of these poems, students experienced other cultures unlike their own. Due to this cultural experience, the students gained an appreciation of, and questioned, their own culture while demonstrating empathy towards others. The implications for research and instruction highlight the use of nonfiction literature in high school social studies classes to create self, text, and world poems. Through the creation of poems, students address some of the themes, such as individual development and awareness of cultures, put forth by the National Council for Social Studies (NCSS), the need to provide opportunities for students to demonstrate empathy, and the recommendation for teachers getting to know their students. The study also informed my reading practices as an English teacher.

    Committee: Denise N. Morgan PhD (Committee Chair); Alicia R. Crowe PhD (Committee Member); Tricia Niesz PhD (Committee Member) Subjects: Education
  • 19. Lister, Kelly Compulsive Text Messaging: Do Youth Need to Kick the Habit?

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2010, Psychology/Clinical

    In the last decade, electronically-mediated communication (EMC) has increased dramatically as a format for social interaction, particularly among adolescents. Despite this increase, little research has focused on the behaviors occurring in EMC, most notably through text messaging. The purpose of this study was to address questions regarding the frequency and compulsivity of adolescents' texting, its relation to adjustment, and moderators of the relation between compulsivity of texting and adjustment. Participants were 211 8th graders who completed a survey about their texting behaviors. Most adolescents (80%) reported sending text messages between a few days a week and every day, with 23% reporting that they send and receive over 100 text messages each day. Females reported greater compulsivity of texting than males. Frequency of texting was related to compulsivity of texting, although only 9% of students reported compulsively texting at the rate of sometimes or more. Compulsive texting was positively related to aggression and negatively related to academic adjustment. Self-control, including effortful control and conscientiousness, moderated the relation between compulsive texting and internalizing problems, academic adjustment, and prosocial behavior. There was a protective effect for high self-control, such that compulsive texters with high self-control showed more positive adjustment than those with less self-control. Limitations of this survey study included limited generalizability of results due to the age and ethnic distributions of the sample and the lack of longitudinal data, which precludes conclusions about temporal directions of effects. Finally, ideas for future studies and important implications of this study, such as parental supervision of texting and school-related responses to texting, were discussed.

    Committee: Eric Dubow PhD (Advisor); Jean Gerard PhD (Committee Member); Montana Miller PhD (Committee Member); Dara Musher-Eizenman PhD (Committee Member) Subjects: Communication; Education; Psychology
  • 20. Domoff, Sarah The Role of Texting Motivations in Moderating the Relation between Compulsive Texting and Adolescents' Adjustment

    Master of Arts (MA), Bowling Green State University, 2010, Psychology/Clinical

    Over the past several years, text messaging has dramatically increased among youth, but there is limited research on how compulsive texting relates to children's adjustment. The purpose of the current study was to examine the relation of compulsive texting to adjustment indices drawn from literature on internet addiction (e.g., academic adjustment, aggression, internalizing symptoms) and, based on the Uses and Gratifications Model, examine how motivations for texting (e.g., efficiency, escapism, intimacy, perceived control over social interactions) moderated the relation between compulsive texting and the adjustment variables. For exploratory purposes, sex was examined as an additional moderator. Survey data were collected from 403 students in the 8th and 11th grades in a rural Midwestern community. Correlational analyses supported the hypothesis that higher levels of compulsive texting would relate to higher levels of aggression and internalizing symptoms and lower levels of academic adjustment. Sex differences were found on the relation between compulsive texting and internalizing symptoms and on the relation between compulsive texting and academic adjustment. For these relations, significant correlations were found only for females. Hierarchical multiple regression analyses supported the hypotheses that the escapism motivation for texting and the perceived control over social interactions motivation for texting moderated the relation between compulsive texting and internalizing symptoms, for both males and females. These motivations strengthened the relation between compulsive texting and internalizing symptoms, suggesting that they could serve as vulnerability factors. Although the study is correlational and causality cannot be determined, the findings have implications for theory and practice. For example, it could be that deficits in coping skills (escapism motivation) or discomfort in face-to-face social interactions (perceived control over social interaction (open full item for complete abstract)

    Committee: Eric Dubow PhD (Committee Chair); Carolyn Tompsett PhD (Committee Member); Anne Gordon PhD (Committee Member) Subjects: Psychology
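
Several of the results above (entries 6, 7, 8, and 17) evaluate their proposed methods against a bag-of-words / TF-IDF baseline fed to a standard supervised classifier. For readers unfamiliar with that baseline, the short sketch below shows one minimal way to build it in Python. The choice of scikit-learn, the toy snippets, and the category labels are illustrative assumptions only and do not come from any of the theses listed here.

    # Minimal bag-of-words / TF-IDF classification baseline (a sketch, not any thesis's code).
    # Assumes scikit-learn is installed; the example texts and labels are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled snippets standing in for privacy-policy segments or tweets.
    texts = [
        "We collect your email address when you register.",
        "Your location data may be shared with advertisers.",
        "Breaking news: severe storms expected tonight.",
        "Great deal on laptops this weekend only!",
    ]
    labels = ["first_party_collection", "third_party_sharing", "news", "deals"]

    # TF-IDF turns each document into a sparse term-weight vector (the VSM / bag-of-words
    # representation discussed in entry 7); a linear classifier is then fit on top of it.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    print(model.predict(["We may share your usage data with partners."]))

The feature-engineering and topic-model approaches described in the abstracts above are presented by their authors as improvements over exactly this kind of term-frequency baseline, which ignores word order and cross-term dependencies.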