Search Results (1 - 25 of 135 Results)

Imbulgoda Liyangahawatte, Gihan Janith Mendis. Hardware Implementation and Applications of Deep Belief Networks
Master of Science in Engineering, University of Akron, 2016, Electrical Engineering
Deep learning is a subset of machine learning that contributes widely to the contemporary success of artificial intelligence. The essential idea of deep learning is to process complex data by abstracting hierarchical features via a deep neural network structure. As one type of deep learning technique, the deep belief network (DBN) has been widely used in various application fields. This thesis proposes an approximation-based hardware realization of DBNs that requires low hardware complexity. This thesis also explores a set of novel applications of the DBN-based classifier that will benefit from a fast implementation of DBN. In my work, I have explored the application of DBNs to automatic modulation classification for cognitive radio, Doppler radar sensing for detection and classification of micro unmanned aerial systems, cyber security applications that detect false data injection (FDI) attacks and localize flooding attacks, and social networking applications that predict link properties. The work in this thesis paves the way for further investigation and realization of deep learning techniques to address critical issues in various novel application fields.
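
Since the keywords point to a multiplierless digital architecture, a small sketch may help readers see what "approximation-based, low-complexity" typically means in this setting: weights quantized to signed powers of two, so that every multiply in a layer reduces to a bit shift and an add. This is an illustrative sketch of the general technique, with hypothetical names, not the thesis's actual architecture.

    import numpy as np

    def quantize_pow2(w, w_min=2.0**-8, w_max=2.0**3):
        """Round each weight to the nearest signed power of two."""
        sign = np.sign(w).astype(int)
        mag = np.clip(np.abs(w), w_min, w_max)
        exp = np.round(np.log2(mag)).astype(int)
        return sign, exp

    def pow2_matvec(sign, exp, x_fixed):
        """Multiplierless layer: bit shifts and adds replace multiplies."""
        out = []
        for i in range(exp.shape[0]):
            acc = 0
            for j in range(exp.shape[1]):
                e = int(exp[i, j])
                term = (x_fixed[j] << e) if e >= 0 else (x_fixed[j] >> -e)
                acc += int(sign[i, j]) * term
            out.append(acc)
        return out

    # Example: x_fixed holds fixed-point activations as plain integers.
    sign, exp = quantize_pow2(np.array([[0.52, -0.26], [1.9, 0.12]]))
    y = pow2_matvec(sign, exp, [3, 5])

In hardware, each shift-and-add consumes far less area than a full multiplier, which is the usual source of the claimed low complexity.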

Committee:

Jin Wei (Advisor); Arjuna Madanayaka (Committee Co-Chair); Subramaniya Hariharan (Committee Member)

Subjects:

Artificial Intelligence; Computer Engineering; Electrical Engineering; Engineering; Experiments; Information Technology

Keywords:

deep belief networks; multiplierless digital architecture; Xilinx FPGA implementations; low-complexity; applications of deep belief networks; spectral correlation function; modulation classification; drone detection; doppler radar; cyber security

Shoop, Jessica A. Senior Information Technology (IT) Leader Credibility: Knowledge Scale, Mediating Knowledge Mechanisms, and Effectiveness
Doctor of Philosophy, Case Western Reserve University, 2017, Management
This dissertation explains leader effectiveness in the context of the senior information technology (IT) leader, who plays a pivotal role in the execution and delivery of corporate IT services. Considered leaders of leaders, senior IT leaders typically report to the Chief Information Officer (CIO). Using a sequential three-phase mixed-methods study, the thesis makes four contributions: (1) through qualitative inquiry, it shows that effective senior IT leaders maintain a balance of domain knowledge and emotional and social aptitudes; (2) it develops and validates a four-dimensional scale to measure the level of IT leader domain knowledge; (3) it demonstrates the nomological and predictive validity of the scale and evaluates the impact of IT leader domain knowledge in solving managerial problems and brokering knowledge to others; (4) the studies combine to build a cohesive argument that leadership credibility, of which technical domain knowledge forms a core component, is a critical antecedent of leadership effectiveness. The validation is founded on a sample of 104 senior IT leaders and 490 IT leader subordinates within a global IT service firm. Overall, our findings suggest that the so-far neglected effect of IT domain knowledge forms not merely an important but a vital component influencing overall senior IT leader effectiveness. This has consequences for established theories of leader credibility and leader effectiveness in highly specialized technical domains. Practically, the study underscores the importance of hiring and retaining senior IT leaders with strong technical credentials.

Committee:

Kalle Lyytinen, Ph.D. (Committee Chair); Jagip Singh, Ph.D. (Committee Member); Genevieve Bassellier, Ph.D. (Committee Member); John King, Ph.D. (Committee Member)

Subjects:

Business Administration; Information Systems; Information Technology; Management

Keywords:

Senior IT Leaders; Leadership Effectiveness; Credibility; Domain Knowledge; Leader Knowledge; Knowledge Mechanisms; Scale Development; Multi-dimensional Scale Validity; Mixed Methods

Roy, Enakshi. Social Media, Censorship and Securitization in the United States and India
Doctor of Philosophy (PhD), Ohio University, 2017, Journalism (Communication)
Using the theoretical perspectives of the Spiral of Silence and Securitization, this dissertation examines (1) how censorship practices such as content removal were employed by the United States and Indian governments to securitize the internet and social media, and (2) whether such practices contribute to an online spiral of silence. To explore these aspects, this study used a mixed-method approach with in-depth interviews and surveys. Seven interviews with authors of Transparency Reports and legal experts provided information about the U.S. and Indian government-initiated content removal process for Google Web Search, Blogger, YouTube, Facebook and Twitter between 2010 and 2015. Surveys of 587 respondents from the United States and India explored self-censorship on Facebook and Twitter on issues related to national security and government criticism. The findings indicate that in the United States, “defamation” is the most frequently cited, yet often misused, reason for content removal, while in India “religious offense” and “defamation” are the prominent reasons for content takedowns. On several occasions, protected speech was removed from the internet and social media in both countries. Such acts of state-level censorship, in turn, increase self-censorship by individuals on controversial issues on social media. The implication is that using the law to criminalize dissent increases self-censorship, which is counterproductive to democratic discourse.

Committee:

Yusuf Kalyango, Jr., Ph.D. (Committee Chair); Aimee Edmondson, Ph.D. (Committee Member); Eve Ng, Ph.D. (Committee Member); Nukhet Sandal, Ph.D. (Committee Member)

Subjects:

Communication; Information Technology; International Law; Journalism; Legal Studies; Mass Communications; Mass Media; Technology

Keywords:

Transparency Report; Internet censorship; Internet censorship USA, India; Internet Securitization; Spiral of Silence public opinion; public opinion social media; social media censorship; content removal; Google, Facebook, Twitter transparency reporting

Dhar, Samir. Addressing Challenges with Big Data for Maritime Navigation: AIS Data within the Great Lakes System
Doctor of Philosophy, University of Toledo, 2016, Spatially Integrated Social Science
The study presented here deals with commercial vessel tracking in the Great Lakes using the Automatic Identification System (AIS). Specific objectives include the development of methods for data acquisition; data reduction, storage, and management; and reporting of vessel activity within the Great Lakes using AIS. These data show considerable promise for tracking commodity flows through the system as well as documenting traffic volumes at key locations requiring infrastructure investment (particularly dredging). Other applications include detecting vessel calls at specific terminals, locks and other navigation points of interest. This study documents the techniques developed to acquire, reduce, aggregate and store AIS data at The University of Toledo. Specific topics include: techniques for reducing data volumes, vessel path tracking, estimating speed on the waterway network, detecting vessel calls made at a dock, and data analysis and mining for errors within AIS data. The study also revealed the importance of AIS technology for maritime safety, but the data are riddled with errors and inaccuracies. These errors within the AIS data will have to be addressed and rectified in the future to make the data accurate and useful. The data reduction algorithm achieves a 98% reduction in AIS data volume, making the data more manageable. In the future, similar data reduction techniques could be applied to GPS traffic data collected for highways and railways.
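
The abstract does not spell out the reduction rule, so the following is only a plausible sketch of one common track-thinning approach: keep an AIS report only when it deviates from the position dead-reckoned from the last kept report by more than a tolerance. The function name, the tolerance, and the flat-earth conversion are illustrative assumptions, not the dissertation's algorithm.

    import math

    def thin_track(points, tol_m=50.0):
        """points: list of (t_sec, lat, lon, speed_mps, course_deg).

        Keeps only reports that deviate from dead reckoning; predictable
        intermediate reports are dropped, shrinking the dataset.
        """
        if not points:
            return []
        kept = [points[0]]
        for p in points[1:]:
            t0, lat0, lon0, v0, c0 = kept[-1]
            t, lat, lon, _, _ = p
            dt = t - t0
            rad = math.radians(c0)
            # dead-reckoned displacement from the last kept point, metres
            dn, de = v0 * dt * math.cos(rad), v0 * dt * math.sin(rad)
            # actual displacement, metres (small-angle approximation)
            an = (lat - lat0) * 111320.0
            ae = (lon - lon0) * 111320.0 * math.cos(math.radians(lat0))
            if math.hypot(an - dn, ae - de) > tol_m:
                kept.append(p)
        if kept[-1] is not points[-1]:
            kept.append(points[-1])
        return kept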

Committee:

Peter Lindquist (Committee Chair); Kevin Czajkowski (Committee Member); Neil Reid (Committee Member); Mark Vonderembse (Committee Member); Richard Stewart (Committee Member)

Subjects:

Geographic Information Science; Geography; Information Technology; Remote Sensing; Social Research; Transportation

Keywords:

Automatic Identification System, AIS, Big Data, Data Reduction Technique, Vessel Path, Vessel Call, Great Lakes, Maritime, VTS

Yang, Liu. Effect of product review interactivity, social inequality, and culture on trust in online retailers: A comparison between China and the U.S.
Doctor of Philosophy (Ph.D.), Bowling Green State University, 2017, Media and Communication
This study is the first to compare the predictive strength of a micro factor (interactivity of product review use experiences) and macro factors (social inequality and culture) on consumers’ trust in online retailers. It examines the predictors of online trust through information asymmetry theory, reciprocity, in-group favoritism and out-group derogation, and social presence. Consumers of the two largest e-commerce sites in the United States and China, Amazon and Tmall, are compared. The results show that the interactivity of product review use experience is the strongest predictor of consumers’ trust in online retailers compared to social inequality and culture. Interactivity is positively related to consumers’ trust in famous brands, third-party retailers, and fulfilled third-party retailers on both Amazon and Tmall. In contrast, social inequality is negatively related to consumers’ trust in famous brands, third-party retailers, and fulfilled third-party retailers on both Amazon and Tmall. Individualism is positively related to trust in third-party retailers, while collectivism is positively related to trust in third-party retailers fulfilled by Amazon or Tmall. Power distance exerts a positive impact on trust in famous brands only. Collectivism plays a more critical role in predicting trust in fulfilled online retailers in the Chinese sample than in the U.S. sample. The relationship between trust in online retailers and consumers’ actual online purchases differs across countries. Trust in online retailers is an important direct predictor of online purchase diversity and an indirect predictor of the amount of money spent online in both the U.S. and China. It is a direct predictor of online purchase frequency in the U.S., but an indirect predictor of purchase frequency in China. Trust in online retailers is positively related to the amount of money spent on Amazon/Tmall indirectly, by affecting shopping frequency on Amazon/Tmall.

Committee:

Louisa Ha, Professor (Advisor); Gi Woong Yun, Associate Professor (Committee Member); Lisa Hanasono, Associate Professor (Committee Member); Philip Titus, Associate Professor (Committee Member)

Subjects:

Comparative; Information Technology; Marketing; Mass Communications; Mass Media; Social Research

Keywords:

interactivity; online trust; product reviews; e-commerce; social inequality; culture; comparative study; China; US

Mohd Faseeh, Fnu. Probabilistic Smart Terrain Algorithm
Master of Computing and Information Systems, Youngstown State University, 2016, Department of Computer Science and Information Systems
Smart terrain is an algorithm in which objects that can meet a need transmit signals to non-player characters, with those needs influencing a character to move toward those objects. We describe how probabilistic reasoning can be implemented on top of it by deploying objects that may meet a need with a given probability. An expected distance can then be computed from the probabilities and distances of the objects that meet the need, allowing the non-player character to choose a route through the game. The algorithm can be used to manage a character's needs, directing it toward the objective with the highest priority or greatest payoff. Combined with smart terrain, the algorithm defines how to select a goal in terms of probability and distance, and how the character makes human-like decisions. We implement the algorithm in a Unity 3D game using waypoints and a navigation mesh, where the objective is to find and collect valuable objects while staying away from the guards protecting them, all while navigating a maze-like game world. The algorithm finds paths based on the concept of adjacent routes, making it difficult for the player to avoid the guards. The player, on the other hand, is controlled by the user. The algorithm searches possible paths and then makes decisions based on calculations over probabilities and distances, as discussed in detail in the thesis. Several features beyond path finding, such as ray casting and the navigation mesh, are also implemented to make the game feel lifelike. The guards behave intelligently, and the algorithm updates the probability of the player being in a particular area of the game world over time. This makes the game even tougher to win.
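
The expected-distance idea can be made concrete. Assuming each object along a candidate route satisfies the need with an independent probability at a known distance (a simplification; the thesis's exact formulation is not reproduced here), the character can score each route and follow the cheapest one:

    def expected_distance(route):
        """route: list of (distance_to_object, p_meets_need) along one path.

        Expected travel distance until the need is first met, conditioned
        on the need being met by some object on this route.
        """
        expected, p_unmet = 0.0, 1.0
        for d, p in route:
            expected += p_unmet * p * d      # need first met at this object
            p_unmet *= (1.0 - p)
        p_met = 1.0 - p_unmet
        return expected / p_met if p_met > 0 else float("inf")

    # The non-player character follows the route with the lowest score:
    routes = {"left": [(4, 0.3), (9, 0.8)], "right": [(6, 0.9)]}
    best = min(routes, key=lambda name: expected_distance(routes[name]))

Here the "left" route scores about 7.3 and the "right" route 6.0, so a need-driven character would head right even though its first object is farther away, because that object is far more likely to satisfy the need.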

Committee:

John Sullins, Ph.D. (Advisor); Alina Lazar, Ph.D. (Committee Member); Abdu Arslanyilmaz, Ph.D. (Committee Member)

Subjects:

Computer Science; Educational Technology; Information Technology; Web Studies

Keywords:

smart terrain, waypoints, navigation mesh, ray cast

Schafer, Sarah E. Technology Systems and Practices in Transportation and Logistics: Exploring the Links Toward Competitive Advantage in Supply Chains
Doctor of Philosophy, University of Toledo, 2015, Manufacturing and Technology Management
Higher demands for a variety of products add not only to the complexity of coordinating a supply chain, but also to the number of freight movements to support those demands. The increased demand for moving materials and goods contributes to higher levels of congestion and pollution during a time when businesses, customers and governments are increasingly concerned with reducing carbon footprints. To this end, new technologies and data capabilities are emerging that can add integrated visibility (monitoring and tracing), efficiency and even sustainability within the supply chain in order to mitigate these issues and cultivate an ever-desired competitive advantage. Companies continuously look for innovative ways to evolve and compete within their dynamic environments. One untapped area that can provide a significant source of competitive advantage lies within the complex supplier network and distribution channels; specifically, within the logistics and transportation functions. In an era of increasingly complex supplier network relationships, there is a growing need to connect and automate the extended supply chain between organizations. Applications of information technologies (IT) are seen as key enablers to mitigate these issues, yet widespread use is not evident between trade partners and transportation providers. Applications of IT-enabled systems (i.e., intelligent transportation systems for freight and transportation management systems) and practices (i.e., integrated information sharing and third-party supply chain and logistics managers) can be used to improve efficiency and reliability and to reduce the carbon effects of freight movements. Benefits derived from the movement of freight can, in turn, benefit the wider supply chain through faster response times and lower holding costs realized from reduced inventories. Drawing on contingency theory and organizational information processing theory, this research conceptualizes a model to study the relationships between five major constructs: (1) External Environmental Pressures, (2) Internal Organizational Environment, (3) IT-Enabled Systems and Practices, (4) Transportation Outcomes, and (5) Competitive Advantage of the Supply Chain. Transportation as the link between enterprises in the supply chain is not well understood; this work is expected to open a new area for examining the interfaces between organizations in order to improve overall performance for supply and distribution networks. The development of a reliable instrument to test these relationships will contribute to research and practice. Hypothesized relationships were tested through a combined statistical analysis of primary data collected from 260 transportation providers. By providing researchers with a better understanding of the contextual factors that drive organizational technology adoption, it will become easier to identify factors of success for future innovative technology initiatives, particularly in the transportation and logistics industry. Moreover, managers are expected to find the results of evaluating specific types of IT-enabled systems and practices particularly useful, as they provide metrics for evaluating investments in those systems and practices based on performance measures for transportation outcomes in efficiency, reliability, responsiveness, quality, carbon emissions reduction, and equipment utilization.
Results indicate that some IT-enabled systems and practices, mainly intelligent transportation systems for freight and integrated information sharing, do positively impact transportation outcomes. Other IT-enabled systems and practices were found to have weak impacts (i.e., using a transportation management system) or non-significant relationships (i.e., using a third-party supply chain and logistics manager). Implications of these findings are discussed. Finally, results indicate a strong relationship between positive transportation outcomes and the competitive advantage of the supply chain network, indicating the importance of utilizing transportation providers to differentiate service offerings and build a competitive advantage for the supply chain. Contributions to research and implications of these results for practice are discussed.

Committee:

Mark Vonderembse, Ph.D. (Committee Chair); Peter Lindquist, Ph.D. (Committee Member); Thomas Sharkey, Ph.D. (Committee Member); P. Sundararaghavan, Ph.D. (Committee Member)

Subjects:

Business Administration; Information Technology; Transportation

Keywords:

Supply chain management, transportation, logistics, information technology applications

Sinha, Vinayak. Sentiment Analysis on Java Source Code in Large Software Repositories
Master of Computing and Information Systems, Youngstown State University, 2016, Department of Computer Science and Information Systems
While developers write code to accomplish the task assigned to them, their sentiments play a vital role and have a massive impact on quality and productivity. Sentiments can have either a positive or a negative impact on the tasks being performed by developers. This thesis presents an analysis of developer commit logs for GitHub projects. In particular, developer sentiment in commits is analyzed across 28,466 projects within a seven-year time frame. We use the Boa infrastructure's online query system to generate commit logs as well as the files that were changed during each commit. Two existing sentiment analysis frameworks (SentiStrength and NLTK) are used for sentiment extraction. Using these tools, we analyze commits from projects in three categories (large, medium, and small, based on the number of commits). In addition, we group the data by the day of the week the commit was made and map sentiment to file change history to determine whether there is any correlation. Although the majority of sentiment was neutral, negative sentiment was about 10% more common than positive sentiment overall. Tuesdays had the most negative sentiment overall. In addition, we find a strong correlation between the number of files changed and the sentiment expressed by the commits those files were part of. We also observed that SentiStrength and NLTK show consistent results and similar trends. Future work and implications of these results are discussed.
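
As a flavor of the extraction step (the thesis used SentiStrength and NLTK; its exact scoring pipeline is not reproduced here), a minimal sketch using NLTK's VADER analyzer to score commit messages and average them by weekday might look like this:

    from collections import defaultdict
    from datetime import datetime
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    def weekday_sentiment(commits):
        """commits: (iso_timestamp, message) pairs, e.g. from a Boa query."""
        by_day = defaultdict(list)
        for ts, msg in commits:
            day = datetime.fromisoformat(ts).strftime("%A")
            by_day[day].append(sia.polarity_scores(msg)["compound"])
        return {day: sum(s) / len(s) for day, s in by_day.items()}

    print(weekday_sentiment([
        ("2016-03-01T10:00:00", "fix nasty crash, ugh"),
        ("2016-03-02T09:30:00", "clean refactor, works great"),
    ]))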

Committee:

Bonita Sharif, PhD (Advisor); Alina Lazar, PhD (Committee Member); John Sullins, PhD (Committee Member)

Subjects:

Computer Science; Information Technology; Organizational Behavior

Keywords:

Sentiment Analysis; Emotions; Commit logs; Java projects; Large Software Repositories

Albahli, Saleh Mohammad. Ontology-based approaches to improve RDF Triple Store
PHD, Kent State University, 2016, College of Arts and Sciences / Department of Computer Science
The World Wide Web enables easy, instant access to a huge quantity of information. Over the last few decades, a number of improvements have helped the web reach its current state. However, the current web links documents together without understanding them, making web content only human-readable rather than machine-understandable. There is therefore a growing need to make information machine-understandable, not merely machine-processable, in order to reach a web of knowledge. To address this problem, the Semantic Web, or "web of meaning," shifts thinking about data published in web pages so that machines can understand the contents; computers would then be able to interoperate and reason on our behalf, opening up several different perspectives. However, with the increasing quantity of semantic data, there is a need for efficient and scalable performance from the semantic repositories that store, and from which must be retrieved, large datasets containing Resource Description Framework (RDF) triples. This is a major obstacle to reaching the goal of the Semantic Web, and the problem is magnified by the unpredictable nature of data encoded in RDF. Additionally, current RDF stores in general scale poorly, which exacerbates query and retrieval performance. As a consequence, we propose new semantic storage models for managing RDF data in relational databases, showing how a state-of-the-art scaling method can be improved with ontology-based techniques for speed and high scalability.
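
For context on the baseline being improved: the simplest relational layout for RDF is a single three-column triple table, and it scales poorly precisely because every graph pattern turns into self-joins over one large table. A minimal sketch of that baseline (not the thesis's proposed ontology-based models):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
    con.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
        ("ex:alice", "rdf:type", "ex:Person"),
        ("ex:alice", "ex:knows", "ex:bob"),
        ("ex:bob",   "rdf:type", "ex:Person"),
    ])

    # "Who does a Person know?" already needs a self-join for this tiny
    # pattern; deeper graph patterns multiply these joins, which is the
    # scaling problem ontology-aware storage models try to avoid.
    rows = con.execute("""
        SELECT t1.s, t2.o FROM triples t1
        JOIN triples t2 ON t1.s = t2.s
        WHERE t1.p = 'rdf:type' AND t1.o = 'ex:Person' AND t2.p = 'ex:knows'
    """).fetchall()
    print(rows)   # [('ex:alice', 'ex:bob')]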

Committee:

Austin Melton (Committee Chair); Angela Guercio (Committee Member); Ye Zhao (Committee Member); Alan Brandyberry (Committee Member); Mark Lewis (Committee Member)

Subjects:

Computer Science; Information Technology

Keywords:

Semantic Web, RDF data management, Triple Store, Ontology, FCA, Relational Database

Church, Donald Glen. Reducing Error Rates in Intelligence, Surveillance, and Reconnaissance (ISR) Anomaly Detection via Information Presentation Optimization
Master of Science in Industrial and Human Factors Engineering (MSIHE), Wright State University, 2015, Industrial and Human Factors Engineering
In the ISR domain, time-critical decision-making and dealing with multiple information feeds place high demands on the human. When designing aids and tools, the decision maker must be taken into account. This research looks toward designing a decision aid based on the personality type of the operator. The Big Five Inventory (BFI) is used to determine the impact of personality and decision aid type (graphical vs. textual) on performance. Results show Openness and Agreeableness to be the strongest single factors for decision aid impact on performance. A model was also developed to show how the human takes in information and relates it to a mental model for use in making an identification. This can assist the ISR community in developing an adaptive aiding system to reduce the cycle time of the decision-making process and have the greatest impact on performance.

Committee:

Mary Fendley, Ph.D. (Advisor); Richard Warren, Ph.D. (Committee Member); Pratik Parikh, Ph.D. (Committee Member)

Subjects:

Engineering; Industrial Engineering; Information Technology; Personality Psychology

Keywords:

Personality; BFI; Big Five; ISR; Intelligence; Surveillance; Reconnaissance; SDT; Signal detection; visual aid; graphical aid; textual aid; interface design; perception; cognitive fit; perception model; information processing; human factors

Aydar, Mehmet. Developing a Semantic Framework for Healthcare Information Interoperability
PHD, Kent State University, 2015, College of Arts and Sciences / Department of Computer Science
Interoperability in healthcare is defined as the ability of health information systems to work together within and across organizational boundaries in order to advance the effective delivery of healthcare for individuals and communities. The current healthcare information technology environment breeds incredibly complex data ecosystems. In many cases pertinent patient records are collected in multiple systems, often supplied by competing manufacturers with diverse data formats. This causes inefficiencies in data interoperability, as differing data formats create barriers to exchanging health information. This dissertation presents a semantic framework for healthcare information interoperability. We propose a system for translating healthcare instance data, based on structured mapping definitions and using RDF as a common information representation, to achieve semantic interoperability between different data models. Moreover, we introduce an entity similarity metric that applies the Jaccard index to the common relations of data entities and the common string literal words referenced by them, augmented with the similarity of the entities' neighbors. The precision of the similarity metric is enhanced by incorporating auto-generated importance weights for the entity descriptors in the RDF representation of the dataset. Furthermore, we provide an automatic classification method, which we call summary graph generation, based on the pairwise entity similarities, and we propose that the summary graph can further be utilized for interoperability purposes. Finally, we present a suggestion-based semi-automatic instance matching system and test it on the RDF representation of a healthcare dataset. The system utilizes the entity similarity metric and presents similar node pairs to the user for possible instance matching. Based on the user feedback, it merges the matched nodes and suggests more matching pairs depending on the common relations and neighbors of the already matched nodes. We propose that this instance matching technique could be leveraged for mapping between separate data models.
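
The similarity metric can be sketched directly from the description: a Jaccard index over an entity's relations and over its literal words, blended with the similarity of already-matched neighbors. The blend weights and single-pass neighbor term below are illustrative assumptions, not the dissertation's exact formulation (which also auto-generates importance weights for entity descriptors).

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def entity_similarity(e1, e2, neighbor_sims=(),
                          alpha=0.4, beta=0.4, gamma=0.2):
        """e1, e2: dicts with 'relations' and 'literal_words' sets.

        neighbor_sims: similarities of already-matched neighbor pairs;
        the dissertation refines these iteratively, here a single pass.
        """
        rel = jaccard(e1["relations"], e2["relations"])
        lit = jaccard(e1["literal_words"], e2["literal_words"])
        nbr = sum(neighbor_sims) / len(neighbor_sims) if neighbor_sims else 0.0
        return alpha * rel + beta * lit + gamma * nbr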

Committee:

Austin Melton (Advisor); Angela Guercio (Committee Member); Ye Zhao (Committee Member); Alan Brandyberry (Committee Member); Helen Piontkivska (Committee Member); Javed I. Khan (Committee Chair); James L. Blank (Other)

Subjects:

Computer Science; Health Care; Health Sciences; Information Systems; Information Technology; Medicine

Keywords:

Healthcare Information Interoperability;Semantic Web;RDF;Translation of Instance Data;Summary Graph;RDF Instance Match;RDF Entity Similarity;Automatic Mapping;Information Translation

Lipkin, Ilya. Testing Software Development Project Productivity Model
Doctor of Philosophy in Manufacturing and Technology Management, University of Toledo, 2011, Manufacturing and Technology Management

Software development is an increasingly influential factor in today’s business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted.

There is no accurate model or measure available to guide an organization in its quest to estimate software development, with existing estimation models often underestimating software development effort by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated using local data with small sample sizes, with the resulting estimates not offering improved cost analysis.

This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis based on Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD.

Practical implications of this study allow practitioners to concentrate on the specific constructs of interest that provide the best value for the least amount of time. This study outlines the key contributing constructs that are unique to Software Size E-SLOC, Man-hours Spent, and Quality of the Product, these being the constructs with the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for customers and suppliers.

Theoretical contributions of this study include an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains, such as IT, command and control, and simulation. This research validates findings from previous work concerning software project productivity and leverages those results in this study. The hypothesized project productivity model provides statistical support and validation for expert opinions used by practitioners in the field of software project estimation.

Committee:

Jeen Su Lim (Committee Chair); James Pope (Committee Member); Michael Mallin (Committee Member); Michael Jakobson (Committee Member); Wilson Rosa (Advisor)

Subjects:

Aerospace Engineering; Armed Forces; Artificial Intelligence; Business Administration; Business Costs; Computer Engineering; Computer Science; Economic Theory; Economics; Electrical Engineering; Engineering; Industrial Engineering; Information Science; Information Systems; Information Technology; Management; Marketing; Mathematics

Keywords:

"Software Estimation"; "Software Cost Model"; "Department of Defense Data"; COCOMO; "Software Project Productivity Model"

Chen, Wei. Developing a Framework for Geographic Question Answering Systems Using GIS, Natural Language Processing, Machine Learning, and Ontologies
Doctor of Philosophy, The Ohio State University, 2014, Geography
Geographic question answering (QA) systems can help make geographic knowledge accessible by directly answering natural language questions. In this dissertation, a geographic question answering (GeoQA) framework is proposed that incorporates techniques from natural language processing, machine learning, ontological reasoning and geographic information systems (GIS). We demonstrate that GIS functions provide valuable rule-based knowledge, which may not be available elsewhere, for answering geographic questions. Ontologies of space are developed to interpret the meaning of linguistic spatial terms, which are later mapped to components of a query in a GIS; these ontologies are shown to be indispensable during each step of question analysis. A customized classifier based on dynamic programming and a voting algorithm is also developed to classify questions into answerable categories. To prepare a set of geographic questions, we conducted a human survey and identified the four categories with the most questions for experiments. These categories were later used to train a classifier to classify new questions. Classified natural language questions are converted into spatial SQL queries to retrieve data from relational databases. Consequently, our demo system is able to give exact answers to four categories of geographic questions within an average time of two seconds. The system has been evaluated using classical machine learning measures and achieved an overall accuracy of 90% on test data. Results show that spatial ontologies and GIS are critical for extending the capabilities of a GeoQA system. The spatial reasoning of GIS makes it a powerful analytical engine for answering geographic questions through spatial data modeling and analysis.
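
To make the "classified question to spatial SQL" step concrete, here is a toy mapping from an answerable category to a parameterised spatial query; the category names, table schema, and PostGIS-style functions are assumptions for illustration, not the demo system's actual templates.

    # Hypothetical templates: question category -> parameterised spatial SQL.
    TEMPLATES = {
        "distance_between": (
            "SELECT ST_Distance(a.geom, b.geom) FROM places a, places b "
            "WHERE a.name = %s AND b.name = %s"
        ),
        "contained_in": (
            "SELECT b.name FROM places a, places b "
            "WHERE a.name = %s AND ST_Contains(b.geom, a.geom)"
        ),
    }

    def to_spatial_sql(category, entities):
        """entities: place names extracted during question analysis."""
        return TEMPLATES[category], tuple(entities)

    # e.g. "How far is Columbus from Toledo?" after classification:
    sql, params = to_spatial_sql("distance_between", ["Columbus", "Toledo"])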

Committee:

Eric Fosler-Lussier, Dr. (Committee Member); Rajiv Ramnath, Dr. (Committee Member); Daniel Sui, Dr. (Committee Member); Ningchuan Xiao, Dr. (Committee Chair)

Subjects:

Cognitive Psychology; Computer Science; Geographic Information Science; Geography; Information Science; Information Systems; Information Technology; Language

Keywords:

geographic information system; GeoQA; geographic question answering framework; geolinguistics; spatial ontologies;

MacRobbie, Danielle Elizabeth. An Investigation of Technological Impressions in Steve Reich and Beryl Korot's Three Tales
Master of Music (MM), Bowling Green State University, 2013, Music History
The impact of technology upon the twentieth century and the influence it continues to exert upon the present human community is self-evident. The allure and power of technology are broadcast via the grandest media and performance entertainment, while at the opposite end of the spectrum, technology is continually refined to render its electro-mechanical or bio-technical feats for humans. It is this theme of the increasing growth and import of technology upon every facet of human life that serves as the subject of Three Tales, a twenty-first-century documentary digital video opera by composer Steve Reich and video artist Beryl Korot. In this work, Reich and Korot confront society's negligence of particular directions that technological development and application have undergone in the past century, and advise against taking the same paths in the coming era. Even as modern technology is critiqued in Three Tales, the work itself bends to accept the reality of technology's significance upon modern thought and life. In keeping with Reich and Korot's categorization of the work as a "documentary digital video opera," Three Tales is a performance work heavily dependent upon technology for its generation, presentation, and discussion of the interchange between technology and humankind. This thesis investigates how technology has shaped the course of an artwork whose purpose is to expose and debate the handling of technology in current society. Technology in Three Tales is examined from various perspectives. Chapter one presents the foundational role of technology as "tool," "subject," and "theme." Chapter two considers how visual and audio technologies are used in Three Tales to suggest the effects technology may have upon perceptions of human connectedness and isolation. Chapter three investigates the inherent paradox in Three Tales that arises from using technological devices for the work's production while its theme critiques modern technological advances. The chapter also considers the influence technology has upon the formation of Three Tales's generic identification.

Committee:

Eftychia Papanikolaou (Advisor); Alexa Woloshyn (Committee Member); Mary Natvig (Committee Member)

Subjects:

Biology; Ethics; History; Information Technology; Medical Ethics; Military History; Minority and Ethnic Groups; Music; Nanotechnology; Robotics; Robots; Spirituality; Technology; Theology

Keywords:

Steve Reich; Beryl Korot; Three Tales; Technology; Hindenburg zeppelin; Bikini Atoll; Cloning; electronic music; IRCAM; freeze frame sound; new music theater; Kismet; human connectedness; human isolation; technology and art; art and politics; paradox

Prempeh, James Agyeman. Dynamic Culture-Centered Design for User Empowerment, with Applications to Techno-Culture in Ghana
Master of Technical and Scientific Communication, Miami University, 2011, English
This paper explores why and how dynamic approaches to Culture-Centered Design can help designers conceive of, and develop, technologies effective at empowering users in specific cultural contexts. In the context of developments in the objective and theories of Culture-Centered Design, I explicate dynamic approaches as those that recognize the dynamic nature of cultural context, the socio-cultural meaning of technologies, and user activity with technology. To illustrate their relevance, these approaches are then applied to the techno-culture of Ghana—as representative of technology challenges and opportunities in Africa—to generate ideas regarding how Ghanaians could be better empowered with effective information technologies.

Committee:

Huatong Sun, PhD (Committee Co-Chair); James Coyle, PhD (Committee Co-Chair); Jean Lutz, PhD (Committee Member)

Subjects:

Cultural Anthropology; Design; Information Technology; Technical Communication; Technology

Keywords:

Culture-Centered Design; Cross-Cultural Design; Internationalization; Ghana; Africa; Information Technology; Usability

Beam, Michael A. Personalized News: How Filters Shape Online News Reading Behavior
Doctor of Philosophy, The Ohio State University, 2011, Communication

The evolution and diffusion of communication technology has consistently changed interactions between members of the public sphere in forming public opinion. Some democratic scholars have worried that recent developments in personalization technologies will degrade public opinion formation. They worry that personalized news allows citizens to pay attention only to news coming from their preferred political perspective and may isolate them from challenging perspectives. Empirical research has shown that people with access to more highly selective information technology demonstrate increases in both selectivity and incidental exposure to diverse perspectives.

This dissertation focuses on these behavioral and attitudinal outcomes of using personalized news technologies. Dual-processing theories of information provide the foundation for analyzing opinion formation within the bounded rationality model of public opinion. Personalized news technologies are hypothesized to increase the amount of news exposure and elaboration through increased personal relevance.

Two studies test these broad hypotheses. First, results from a national random sample of adults show that users of personalized web portals are more likely to engage in increased news viewing both online and offline. No differences in preference for perspective-sharing or perspective-challenging news sources are found between personalized portal users and non-users. Next, results from an online experiment with Ohio adult Internet users show an increase in time spent reading news articles in personalized news portals compared with a generic portal. An interaction effect on news elaboration is found between customized news portals whose source recommendations are based on explicit user preferences and increased time spent reading per news article. No differences in news elaboration are found for other personalized news designs, including implicitly recommended news sources based on user profile information and showing users only recommended stories. The implications of these results are discussed in terms of the public opinion debate about new communication technologies, selective exposure research, information processing research, and personalized information system design.

Committee:

Gerald M. Kosicki, PhD (Advisor); David R. Ewoldsen, PhD (Committee Member); R. Kelly Garrett, PhD (Committee Member); Andrew F. Hayes, PhD (Committee Member)

Subjects:

Behavioral Sciences; Behaviorial Sciences; Communication; Experiments; Information Systems; Information Technology; Journalism; Mass Communications; Political Science

Keywords:

Internet; personalized; personalization; news; public opinion; politics; election; selective exposure; information processing; portal; web; communication; elaboration

Heberling, Rachel Elaine. Obsolete Communication: An Apparition of the Disembodied Hand and Voice
Master of Fine Arts, The Ohio State University, 2011, Art

I propose that we question our suspension of disbelief in email, cell phones, and daily communication devices. As messages are sent and received through wireless signals, invisible words and voices materialize out of the sky, becoming normal by rote familiarity. The presets and automatic corrections of mobile technology (made for the operator by the machine) cause us to take most communication for granted, so that we have become less aware of our technological extensions in the role of thought.

I wish to remove a layer of this familiarity by re-introducing objects from a past era. When taken out of contemporary context by using obsolete, analog devices (that operate at a slower pace with a much more burdensome interface), this eerie integration of hand, voice and machine becomes much more discernable. Older devices are a means to pause and think about how we are still doing the same things, such as typing and sending messages from one machine to the other, but we simply cannot see what happens between them.

In order to explore and make visible these hidden aspects of technology, I have been representing and interacting with communication devices in my artwork through drawing, printmaking, video and performance. In order to signify the disembodied hand and voice, I have drawn dials, buttons, lenses and telephone receivers, disconnected and partially veiled from the operating hand. I have also created a device of impossible communication, to be interacted with and experienced by any willing participants. As early technology exaggerates a now absent modeling after human forms (oddly and appropriately detached), these works evoke a failed connection and represent an absence and yet a presence of the hand and voice.

Committee:

Charles Massey, Jr. (Advisor); Suzanne Silver (Committee Member); Sergio Soave (Committee Member); Mary Jo Bole (Committee Member)

Subjects:

Communication; Fine Arts; History; Information Technology; Performing Arts; Technology

Keywords:

printmaking; typewriters; telephones; obsolete; analog; communication technology; drawing; video

Gadapa, Shalini. Assessing SeeIT 3D, a Software Visualization Tool
Master of Computing and Information Systems, Youngstown State University, 2012, Department of Computer Science and Information Systems
Software is inherently complex. This is especially true for large open-source systems. Over the past two decades a number of software visualization tools have been proposed in the literature. The main idea behind creating a software visualization tool is to help a developer or maintainer comprehend the system at different levels of abstraction. Most of the tools have focused on creating elaborate and pretty-looking visualizations. There have not been many cases where a tool is systematically and empirically validated to make sure that it is really useful to a developer. This thesis tries to bridge the gap between tools and their empirical validation by assessing one such software visualization tool, SeeIT 3D. Sixteen different tasks were developed in the context of understanding an open-source system, JFreeChart, written in Java. Ten subjects were recruited and an observational study was performed. The main goal was to determine the effectiveness of SeeIT 3D while performing typical software tasks using the visualization within the Eclipse IDE. Results and observations are presented. These results will be provided as feedback to the tool developers, who may use them to further improve SeeIT 3D.

Committee:

Bonita Sharif, PhD (Advisor); John Sullins, PhD (Committee Member); Yong Zhang, PhD (Committee Member)

Subjects:

Computer Science; Information Systems; Information Technology

Keywords:

visualization tool; visualizing JFreeChart in SeeIT 3D; SeeIT 3D metaphor; polycylinders (visual type relations)

Garcia, Michael Erik. The Economics of Data Breach: Asymmetric Information and Policy Interventions
Doctor of Philosophy, The Ohio State University, 2013, Agricultural, Environmental and Developmental Economics
Large public and private costs result from attacks on firms' information technology networks. Successful attacks result in data breaches with private damages from business interruption, reputational harm, and investigation forensics. Social losses result from exposing individuals' personal information, leading state, national, and international policymakers to enact legislation to manage these costs. Inadequate economic modeling exists to analyze this phenomenon, despite the large economic impact of cyberspace, e-commerce, and social networking. This research advances information security economics by deviating from a firm-level model to focus on the social welfare implications of firm and regulator decisions. I comprehensively review the economic and policy environment and develop the first rigorous economic model of regulatory approaches to data breach. I develop a one-period model of information security and analyze the efficacy of regulatory interventions in the face of asymmetric information. The model builds upon existing models of firm and firm-consumer information security investment and draws an analogy between information security and managing asymmetric information in the biosecurity and livestock disease literature. I analyze firm and social planner incentives in a non-regulatory environment and three regulatory environments. Without regulation, the firm underinvests in network and data protection relative to the social optimum. In the first regime, the regulator must expend a fixed cost to observe social losses and overcome the firm's moral hazard. The interaction between network and data protection permits the regulator to induce optimal behavior in two investment decisions with a single regulatory instrument. With sufficiently low regulatory costs, this result is socially preferred. In the second regulatory regime, the regulator must expend the same fixed cost for imperfect observation of social losses and administer a program requiring that the firm report breaches. The regulator can induce reporting with a sufficiently large fine for non-reporting, even with imperfect breach monitoring. In this regime, a disclosure investigation cost distorts the firm's investment incentives in a manner inconsistent with social objectives, resulting in increased network protection at the expense of data protection. With a sufficiently high disclosure investigation cost, the firm will invest less in data protection than it would without regulation. The final regime introduces a data protection technology that mitigates social loss and some private damages. The regulator expends the same fixed cost for imperfect observation of social losses and requires disclosure only if the firm does not invest in the safe harbor technology. Except when very costly, this safe harbor technology allows the regulator to induce optimal investment and lower the firm's regulatory burden. The safe harbor technology results in welfare gains except when the technology is very costly, at which point the firm may exit, or the safe harbor regime defaults to the distorted incentives of the disclosure policy. This research advances economic modeling in the relatively undeveloped field of information security economics. As policy aspects of information security become more developed, policymakers will require better tools to analyze policy impacts on both the firm's wealth and social welfare. This research provides a step toward those improved tools.
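
The underinvestment result can be illustrated with a stripped-down one-period model (the notation is mine, not the dissertation's): the firm chooses protection investment x with cost c(x) and breach probability p(x), but counts only its private damages D_f, while the planner also counts social losses D_s.

    \text{Firm: } \min_{x \ge 0}\; c(x) + p(x)\,D_f
    \qquad
    \text{Planner: } \min_{x \ge 0}\; c(x) + p(x)\,(D_f + D_s)

    \text{First-order conditions: } -p'(x_f)\,D_f = c'(x_f),
    \qquad -p'(x^{*})\,(D_f + D_s) = c'(x^{*})

With c convex and increasing and p decreasing, D_s > 0 pushes the planner's optimum above the firm's, so x_f < x^*: absent regulation the firm underinvests, which is the wedge the three regulatory regimes described above try to close.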

Committee:

Brian Roe, Ph.D. (Advisor); Sathya Gopalakrishnan, Ph.D. (Committee Member); Ian Sheldon, Ph.D. (Committee Member)

Subjects:

Economics; Information Technology

Keywords:

cybersecurity; cyber security; data breach; economics; data breach notification; information security; information security economics

Kanaparthi, Pradeep Kumar. Detection and Recognition of U.S. Speed Signs from Grayscale Images for Intelligent Vehicles
Master of Science, University of Toledo, 2012, Electrical Engineering

The aim of this thesis is to develop and implement an algorithm that automatically detects and recognizes U.S. speed signs from grayscale images captured by a camera mounted on the interior mirror of a vehicle, as part of designing smarter vehicles. The system operates in real time within the computational limits of contemporary embedded general-purpose processors. It assists the driver by displaying the assigned speed limit directly in view and provides an additional safety measure by monitoring the vehicle's speed.

The proposed method consists of two phases: a detection phase, in which all possible speed signs in the input image are detected, and a recognition phase, in which the detected regions are recognized and the speed limit information is extracted from them. The detection phase uses region characteristics, such as aspect ratio and size, to hypothesize speed sign locations in the input image. We utilize the connected component labeling technique, adapted for grayscale images, to divide the input image into a set of regions. The recognition phase calculates invariant features of the inner parts of the detected regions using Hu's moments. It verifies the hypothesis before extracting the assigned speed limit from the detected region using a feedforward neural network. The proposed method was evaluated on a number of traffic images, and the results show that the region characteristics are robust to noisy conditions such as partial occlusion, cluttered backgrounds and deformation.
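
An OpenCV-based sketch of the two phases described above (connected components filtered by aspect ratio and size for detection, Hu moments as recognition features). The thresholds are placeholders, and the thesis's own grayscale adaptation of connected component labeling is not reproduced.

    import cv2
    import numpy as np

    def detect_sign_candidates(gray):
        """Return bounding boxes whose shape roughly matches a speed sign.

        gray: 8-bit grayscale image.
        """
        _, bw = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
        boxes = []
        for i in range(1, n):                       # label 0 is background
            x, y, w, h, area = stats[i]
            aspect = w / float(h)
            if 0.6 <= aspect <= 0.9 and area > 400:  # placeholder thresholds
                boxes.append((x, y, w, h))
        return boxes

    def hu_features(gray, box):
        """Scale/rotation-invariant Hu moments of a region's interior."""
        x, y, w, h = box
        m = cv2.moments(gray[y:y + h, x:x + w])
        return cv2.HuMoments(m).ravel()   # features for the neural network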

Committee:

Ezzatollah Salari (Advisor); Kim Junghwan (Committee Member); Jackson Carvalho (Committee Member); Ezzatollah Salari (Committee Chair)

Subjects:

Engineering; Information Technology; Technology

Keywords:

Speed sign; connected component labeling; regions; optical character recognition; neural network.

Mathur, Kush. Mathematical Models and Genetic Algorithm Approaches to Simultaneously Perform Workforce Overtime Capacity Planning and Schedule Cells
Master of Science (MS), Ohio University, 2012, Industrial and Systems Engineering (Engineering and Technology)
The problem studied in this thesis was observed in an actual textile company. The problem is more complex than usual scheduling problems in that we compute overtime requirements and make scheduling decisions simultaneously. Since having tardy jobs is not desirable, overtime work is allowed to minimize the number of tardy jobs or total tardiness. Two different problems are considered. Problem 1 maximizes total profit by delivering jobs on time or early; tardy jobs in this case are treated as lost sales. Problem 2 minimizes total tardiness and overtime costs; tardy jobs in this case are delivered with associated tardiness penalty costs. For Problem 1, various mathematical models are presented reflecting different overtime workforce hiring practices. A Genetic Algorithm (GA) approach is also discussed for solving the same problem under one particular hiring policy. The GA includes some newly proposed mutation operators, dynamic and twin. The proposed twin mutation strategy produced the best results across all problem sizes. Mathematical Model 2 was the best mathematical model with respect to both profit and execution time; it considered partial overtime periods and also allowed different overtime periods across cells. For Problem 2, a mathematical model is presented to solve this complex problem. Experimentation was carried out using three different problem types with five instances each, based on data collected from the company. For most problems, the mathematical model gave results in seconds.
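
The abstract does not define the dynamic and twin mutation operators, so the sketch below shows only a generic GA skeleton for a job-sequence chromosome with a plain swap mutation and a total-tardiness fitness; the operator, fitness, and data are stand-ins for the thesis's actual design.

    import random

    def fitness(seq, due, proc):
        """Negative total tardiness of a job sequence (higher is better)."""
        t, tard = 0, 0
        for j in seq:
            t += proc[j]
            tard += max(0, t - due[j])
        return -tard

    def swap_mutation(seq, rate=0.1):
        seq = list(seq)
        if random.random() < rate:
            i, j = random.sample(range(len(seq)), 2)
            seq[i], seq[j] = seq[j], seq[i]
        return seq

    def evolve(due, proc, pop_size=30, gens=200):
        jobs = list(due)
        pop = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda s: fitness(s, due, proc), reverse=True)
            elite = pop[: pop_size // 2]          # keep the best half
            pop = elite + [swap_mutation(random.choice(elite))
                           for _ in elite]
        return pop[0]

    # due/proc map job -> due date / processing time, e.g.:
    # best = evolve({1: 5, 2: 9, 3: 4}, {1: 3, 2: 4, 3: 2})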

Committee:

Gursel Suer, PhD (Advisor); Dusan Sormaz, PhD (Committee Member); Tao Yuan, PhD (Committee Member); Faizul Huq, PhD (Committee Member)

Subjects:

Applied Mathematics; Engineering; Industrial Engineering; Information Science; Information Systems; Information Technology

Keywords:

Scheduling; Genetic Algorithm; Mathematical Model; decision making

Young, William F. 1:1 Laptops in Education and Achievement Test Results in One Rural High School
Doctor of Education (Educational Leadership), Youngstown State University, 2017, Department of Educational Foundations, Research, Technology and Leadership
The purpose of the study was to explore the relationship between a 1:1 laptop program and achievement test results on the Ohio Graduation Tests (OGT). Two cohorts were examined (N=193): 1. tenth graders who took the OGT subtests in Reading, Writing, Math, Science, and Social Studies in 2014 (n=109) and who had received traditional instruction, and 2. tenth graders who were given individual laptops and eTexts to use at school and at home, and who took the same OGT tests in 2015 (n=84). A chi-square analysis was conducted to compare student performance. No statistical difference was evident in overall passage rates when comparing the two cohorts. For the laptop cohort, there was no statistical difference in the expected counts for the subject areas of Writing, Science, and Social Studies. For Reading, laptop cohort scores reflect a trend, with scores moving upward into the Accelerated performance category. Math results showed significantly more scores falling in the highest performance category, Advanced, than expected. Similarly, within the economically disadvantaged subgroup of the laptop cohort (n=29), a positive and significant difference from what was expected occurred within the Advanced category for Math, while a trend toward significance for improved performance occurred for Reading scores. The potential for significant gains in student achievement is evident. Additional longitudinal research is warranted to better understand the significance of the impact as pedagogical practices develop following initial implementation, and considering contextual factors.
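
The cohort comparison can be reproduced in outline with SciPy's chi-square test of independence; the counts below are made-up placeholders, not the dissertation's data.

    from scipy.stats import chi2_contingency

    # Rows: cohort (traditional 2014, laptop 2015); columns: pass / not pass.
    # Counts are illustrative placeholders only.
    table = [[95, 14],
             [75,  9]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")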

Committee:

Jane Beese, Ed.D. (Committee Chair); Charles Vergon, J.D. (Committee Member); Karen Giorgetti, Ph.D. (Committee Member); I-Chun Tsai, Ph.D. (Committee Member)

Subjects:

Educational Leadership; Educational Technology; Educational Tests and Measurements; Information Technology; Mathematics Education; School Administration; Technology

Keywords:

1 to 1 laptops; laptops in education; laptops and achievement tests; ubiquitous technologies; technology and education; laptops; achievement tests; rural schools; change in education; 1 to 1 technology, one to one; laptop programs; mathematics education

Chen, Lu. Mining and Analyzing Subjective Experiences in User Generated Content
Doctor of Philosophy (PhD), Wright State University, 2016, Computer Science and Engineering PhD
Web 2.0 and social media enable people to create, share and discover information instantly anywhere, anytime. A great amount of this information is subjective information -- information about people's subjective experiences, ranging from feelings about what is happening in our daily lives to opinions on a wide variety of topics. Subjective information is useful to individuals, businesses, and government agencies to support decision making in areas such as product purchase, marketing strategy, and policy making. However, much useful subjective information is buried in ever-growing user-generated data on social media platforms, and it remains difficult to extract high-quality subjective information and make full use of it with current technologies. Current subjectivity and sentiment analysis research has largely focused on classifying text polarity -- whether the expressed opinion regarding a specific topic in a given text is positive, negative, or neutral. This narrow definition does not take into account other types of subjective information, such as emotion, intent, and preference, which may prevent the exploitation of subjective information from reaching its full potential. This dissertation extends the definition and introduces a unified framework for mining and analyzing diverse types of subjective information. We have identified four components of a subjective experience: an individual who holds it, a target that elicits it (e.g., a movie or an event), a set of expressions that describe it (e.g., "excellent", "exciting"), and a classification or assessment that characterizes it (e.g., positive vs. negative). Accordingly, this dissertation contributes novel and general techniques for identifying and extracting these components. We first explore the task of extracting sentiment expressions from social media posts. We propose an optimization-based approach that extracts a diverse set of sentiment-bearing expressions, including formal and slang words/phrases, for a given target from an unlabeled corpus. Instead of associating an overall sentiment with a given text, this method assesses the more fine-grained, target-dependent polarity of each sentiment expression. Unlike pattern-based approaches, which often fail to capture the diversity of sentiment expressions due to the informal nature of language usage and writing style in social media posts, the proposed approach is capable of identifying sentiment phrases of different lengths and slang expressions, including abbreviations and spelling variations. Unlike supervised approaches, which require data annotation when applied to a new domain, the proposed approach is unsupervised and thus highly portable to new domains. We then look into the task of finding opinion targets in product reviews, where the product features (product attributes and components) are usually the targets of opinions. We propose a clustering approach that identifies product features and groups them into aspect categories. Unlike many existing approaches that first extract features and then group them into categories, the proposed approach identifies features and clusters them into aspects simultaneously. In addition, prior work on feature extraction tends to require seed terms and focuses on identifying explicit features, while the proposed approach extracts both explicit and implicit features and does not require seed terms.
Finally, we study the classification and assessment of several types of subjective information (e.g., sentiment, political preference, subjective well-being) in two specific application scenarios. The first application predicts election results by analyzing the sentiments of social media users towards election candidates. Observing that users' political preferences and tweeting behaviors may significantly affect prediction, we propose methods to group users by their political preference and their participation in the discussion, and to assess each group's sentiments towards the candidates to predict the results. We examine the predictive power of the different user groups on the 2012 U.S. Republican Presidential Primaries. The second application investigates the relationship between religiosity and subjective well-being (or happiness). We analyze the tweets and networks of more than 250k U.S. Twitter users who self-declared their beliefs, and build classifiers that distinguish believers of different religions using the self-declared data. To understand the effect of religiosity on happiness, we examine the pleasant and unpleasant emotional expressions in users' tweets to estimate their subjective well-being, and investigate how happiness varies among religious groups. Throughout, this dissertation focuses on methods that require minimal human supervision or labeling effort (e.g., unsupervised methods, or supervised methods trained on self-labeled data) and can therefore be easily applied to new domains and applications. The effectiveness of these methods is demonstrated through evaluation on real-world datasets of user generated content from different domains.
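As a concrete illustration of the well-being estimation step, the sketch below scores a user's tweets by the balance of pleasant versus unpleasant emotional words. The word lists and the score definition are assumptions for illustration, not the lexicon or estimator used in the dissertation.

# Illustrative pleasant/unpleasant emotion balance as a proxy for
# subjective well-being; word lists here are assumed, not the
# dissertation's lexicon.
PLEASANT = {"happy", "love", "joy", "blessed", "grateful"}
UNPLEASANT = {"sad", "angry", "hate", "stressed", "lonely"}

def well_being_score(tweets):
    """(pleasant - unpleasant) / total emotional tokens, in [-1, 1]."""
    pleasant = unpleasant = 0
    for tweet in tweets:
        for token in tweet.lower().split():
            if token in PLEASANT:
                pleasant += 1
            elif token in UNPLEASANT:
                unpleasant += 1
    total = pleasant + unpleasant
    return 0.0 if total == 0 else (pleasant - unpleasant) / total

print(well_being_score(["so happy and grateful today",
                        "traffic makes me angry"]))  # 1/3: net pleasant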

Committee:

Amit Sheth, Ph.D. (Advisor); Krishnaprasad Thirunarayan, Ph.D. (Committee Member); Keke Chen, Ph.D. (Committee Member); Ingmar Weber, Ph.D. (Committee Member); Justin Martineau, Ph.D. (Committee Member)

Subjects:

Computer Science; Information Science; Information Technology

Keywords:

subjective experience, subjective information, sentiment analysis, opinion mining, context-dependency, user generated content, social media

Pathak, Amit. Forecasting Models to Predict EQ-5D Model Indicators for Population Health Improvement
Master of Science (MS), Ohio University, 2016, Industrial and Systems Engineering (Engineering and Technology)
The healthcare sector faces major challenges in many nations, including the United States. Problems within and affecting the healthcare arena are complex because they depend on many interrelated factors. Coping with this situation and finding solutions requires the best possible data-backed predictions to support effective decision making, and even then, extreme caution is needed before turning predictions into policy claims. The EuroQol five-dimension (EQ-5D) questionnaire developed by the EuroQol Group is one of the most widely used tools for assessing the generic health status of a population along five dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. This thesis develops a methodology for building forecasting models that predict these EQ-5D indicators from 65 selected indicators of population health drawn from the World Bank, World Health Organization, and United Nations Development Programme databases. The approach provides insight into well-being at the individual level of a population by manipulating these macroscopic factors. The analysis uses data from 12 countries (Argentina, Belgium, Denmark, Finland, France, Germany, Italy, Netherlands, Slovenia, Spain, and the United States) for both sexes, with ages ranging from 18 to 75+. The models are created using Artificial Neural Networks (ANN) and contrasted with statistical models. The ANN model with all 65 indicators performed best, and the 75+ age group was the most correlated with the EQ-5D dimensions. The research also identifies the countries and indicators that need the most attention to improve each EQ-5D parameter. This thesis aims to foster better policy making for improving population well-being by clarifying the impact of the dominant factors affecting population health.
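The modeling setup can be read as multi-output regression from the 65 indicators to the five EQ-5D dimensions. The Python sketch below, using scikit-learn on synthetic stand-in data, contrasts a small feedforward ANN with a linear statistical baseline; the network architecture, the data, and the library choice are assumptions for illustration, not the thesis's tuned models.

# Sketch: feedforward ANN vs. linear baseline for predicting the five
# EQ-5D dimensions from 65 indicators. Synthetic data stands in for the
# World Bank/WHO/UNDP inputs; hyperparameters are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 65))               # 65 indicator values per sample
W = rng.normal(size=(65, 5))
y = X @ W + 0.1 * rng.normal(size=(300, 5))  # 5 EQ-5D dimension scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

print("ANN R^2:", ann.score(X_te, y_te))
print("OLS R^2:", lin.score(X_te, y_te))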

Committee:

Gary Weckman (Advisor); Diana Schwerha (Committee Member); Tao Yuan (Committee Member); Andy Snow (Committee Member)

Subjects:

Aging; Artificial Intelligence; Behavioral Psychology; Behavioral Sciences; Cognitive Psychology; Demographics; Demography; Developmental Psychology; Economics; Educational Tests and Measurements; Evolution and Development; Finance; Gender Studies; Health; Health Care; Health Care Management; Health Sciences; Higher Education; Industrial Engineering; Information Science; Information Systems; Information Technology; Literacy; Mental Health; Public Health; Public Policy; Sanitation; Social Psychology; Social Research; Statistics; Sustainability

Keywords:

Population Health; EQ-5D; Forecasting; Policy; Artificial Neural Networks; Statistics; Healthcare; Visualize; Data; Decision; World Bank; World Health Organization; WHO; United Nations Development Programme; UNDP; Indicator; Regression; Exploratory; Principal Components

Chen, Zhuo. An Agent-Based Model for Information Diffusion Over Online Social Networks
MA, Kent State University, 2016, College of Arts and Sciences / Department of Geography
Social networking services such as Facebook, Twitter, and Instagram have become popular platforms on which celebrities, news media, organizations, government officials, and the general public express their ideas and opinions, creating a great opportunity for researchers to explore how information spreads through online social networks. Taking advantage of this opportunity, this thesis studies efficient information diffusion on online social networks using agent-based modeling (ABM). A NetLogo ABM was created to conduct the experiments and analyses, together with a real network dataset retrieved from Twitter. The experiments show that, for the same number of nodes and edges, a network with a higher average path length or a lower average clustering coefficient tends to produce wider information diffusion. In addition, locating optimal early adopters for efficient information diffusion depends mainly on the network structure and on the propagation probabilities among individuals in the network. This thesis contributes to the study of information diffusion in online social networks from the perspective of efficient diffusion with agent-based modeling and simulation. Its results could benefit businesses and governments that want to disseminate advertisements or information quickly and economically, and they also offer hints about the geography that likely underlies information diffusion in social networks.
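The thesis implements its model in NetLogo; as a rough Python illustration of the same kind of experiment, the networkx sketch below seeds a few early adopters on a small-world network, propagates information with a fixed per-edge probability (an independent-cascade-style rule, assumed here for concreteness), and reports the spread next to the two structural statistics discussed above.

# Sketch of information diffusion on a network: seed early adopters,
# spread with per-edge probability p, then compute the structural
# statistics the thesis relates to diffusion width. The generator,
# parameters, and cascade rule are illustrative assumptions.
import random
import networkx as nx

def simulate_diffusion(G, seeds, p=0.1, seed=42):
    """Return the set of informed nodes after an independent cascade."""
    rng = random.Random(seed)
    informed, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in G.neighbors(node):
            if nbr not in informed and rng.random() < p:
                informed.add(nbr)
                frontier.append(nbr)
    return informed

G = nx.connected_watts_strogatz_graph(n=500, k=6, p=0.05, seed=1)
spread = simulate_diffusion(G, seeds=[0, 100, 200])
print("informed:", len(spread), "of", G.number_of_nodes())
print("avg clustering:", nx.average_clustering(G))
print("avg path length:", nx.average_shortest_path_length(G))

Rerunning with different generator parameters (and hence different clustering and path-length values) is a quick way to probe the structural effect the thesis reports.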

Committee:

Jay Lee (Advisor); Xinyue Ye (Committee Member); Eric Shook (Committee Member)

Subjects:

Geography; Information Science; Information Technology

Keywords:

information diffusion, agent-based modeling, social network, Twitter, efficient information diffusion
