Search Results (1 - 25 of 93 Results)

Elango, Venmugil. Techniques for Characterizing the Data Movement Complexity of Computations
Doctor of Philosophy, The Ohio State University, 2016, Computer Science and Engineering
The execution cost of a program, both in terms of time and energy, comprises computational cost and data movement cost (e.g., cost of transferring data between CPU and memory devices, between parallel processors, etc.). Technology trends will cause data movement to account for the majority of energy expenditure and execution time on emerging computers. Therefore, computational complexity alone will no longer be a sufficient metric for comparing algorithms, and a fundamental characterization of data movement complexity will be increasingly important. In their seminal work, Hong & Kung proposed the red-blue pebble game to model the data movement complexity of algorithms. Using the pebble game abstraction, Hong & Kung proved tight asymptotic lower bounds for the data movement complexity of several algorithms by reformulating the problem as a graph partitioning problem. In this dissertation, we develop a novel alternate graph min-cut based lower bounding technique. Using our technique, we derive tight lower bounds for different algorithms, with upper bounds matching within a constant factor. Further, we develop a dynamic analysis based automated heuristic for our technique, which enables automatic analysis of arbitrary computations. We provide several use cases for our automated approach. This dissertation also presents a technique, built upon the ideas of Christ et al., to derive asymptotic parametric lower bounds for a sub-class of computations, called affine computations. A static analysis based heuristic to automatically derive parametric lower bounds for affine parts of the computations is also presented. Motivated by the emerging interest in large scale parallel systems with interconnection networks and hierarchical caches with varying bandwidths at different levels, we extend the pebble game model to parallel system architecture to characterize the data movement requirements in large scale parallel computers. 
We provide interesting insights on architectural bottlenecks that limit the performance of algorithms on these parallel machines. Finally, using data movement complexity analysis, in conjunction with the roofline model for performance bounds, we perform an algorithm-architecture codesign exploration across an architectural design space. We model the maximal achievable performance and energy efficiency of different algorithms for a given VLSI technology, considering different architectural parameters.
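The red-blue pebble game mentioned above has a concrete operational reading: count the words that must move between a small fast memory and slow memory. The sketch below simulates that count for a naive matrix multiply against an LRU-managed fast memory. It is a toy model only: the function names and the LRU policy are illustrative choices, not the dissertation's method, which derives lower bounds via graph partitioning and min-cuts.

```python
from collections import OrderedDict

def lru_misses(trace, capacity):
    """Count data transfers (misses) for an address trace run against a
    fast memory of `capacity` words with LRU replacement."""
    cache, misses = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)        # hit: refresh recency
        else:
            misses += 1                    # miss: one word moved in
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[addr] = None
    return misses

def naive_matmul_trace(n):
    """Address trace of the naive ijk multiply C[i][j] += A[i][k] * B[k][j]."""
    for i in range(n):
        for j in range(n):
            for k in range(n):
                yield ('A', i, k)
                yield ('B', k, j)
                yield ('C', i, j)

n, S = 16, 64
q = lru_misses(naive_matmul_trace(n), S)
print(q)  # data movement incurred by the naive schedule
```

With n = 16 and S = 64, the naive loop order incurs far more traffic than blocked schedules that meet Hong & Kung's Omega(n^3 / sqrt(S)) bound within a constant factor, which is exactly the gap that data movement analysis exposes.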

Committee:

P Sadayappan (Advisor); Fabrice Rastello (Committee Member); Atanas Rountev (Committee Member); Radu Teodorescu (Committee Member)

Subjects:

Computer Science

Keywords:

High-Performance Computing; IO complexity; Data movement complexity; Data access complexity; Red-blue pebble game; Lower bounds

Anekal, Prashanth. The Effects of Product Complexity and Supply Base Complexity on Supply Chain Performance
Doctor of Philosophy, University of Toledo, 2014, College of Business and Innovation
Over the years, we have seen that the products manufactured and the supply bases of many manufacturers have become more complex. The reasons for the increase in complexity are many. Prominent ones are (a) advances in manufacturing technology; (b) customers' demand for new and improved product functionality; and (c) manufacturers' need to differentiate themselves from their competitors. The resulting increase in complexity can, however, have negative implications for the performance of the supply chain. As products and supply bases become more complex, the task of managing these complexities and achieving the desired results becomes more challenging. Inability to manage these complexities results in lower performance throughout the supply chain. Thus, product complexity and supply base complexity are both "necessary evils". The manufacturing literature has recognized that product complexity can have negative effects on plant performance, and emerging studies have explored the negative impacts of product complexity and supply base complexity. However, most of these studies are either conceptual or address narrow aspects of performance, such as delivery performance. In order to bridge this gap, the first aim of this study is to examine the impact of product complexity and supply base complexity on the efficiency and responsiveness of the supply chain. Secondly, the study examines the mediating impact of coordination mechanisms on the relationship between product complexity/supply base complexity and supply chain performance. Operational coordination and strategic coordination are proposed as the mediating variables. Thirdly, recognizing that complexity is unavoidable in certain circumstances, the study proposes a set of mechanisms that help supply chains improve coordination and thus reduce the negative effects of complexity on supply chain performance.
The proposed research model was tested using data collected by a large-scale survey of manufacturing firms. The survey was answered by 270 respondents in various managerial roles in purchasing, operations, and supply chain functional areas. The study developed and tested measurement instruments for the constructs proposed in the research model. Instruments were tested for reliability and validity using the collected data. The proposed research model was analyzed using Structural Equation Modeling (SEM). The results of the study suggest a negative impact of product complexity and supply base complexity on supply chain performance. The data, however, show that product complexity does not have a direct impact on supply chain performance, but rather an indirect impact through supply base complexity. This indicates that product complexity affects the nature and structure of the supply base. The role of coordination mechanisms (operational and strategic) as a mediator between complexity and supply chain performance was not supported by the data, which indicates a possible moderating role for coordination mechanisms in this relationship. However, the extent of coordination between supply chain partners was found to be a key determinant of supply chain performance. IT-based and non-IT-based mechanisms were found to be generally effective in mitigating the negative impact of complexity on supply chain performance. This study thus makes contributions to theory by: (a) developing a research framework that draws from multiple theories to identify the relationships between product complexity, supply base complexity, and supply chain performance; (b) identifying the various components of product and supply base complexity in a supply chain system; (c) identifying the strategic and operational roles of coordination mechanisms; and (d) developing and validating measurement instruments that can be employed in future studies.
This study can be of interest to supply chain practitioners since it identifies the effects of complexity in the supply chain and identifies mechanisms to manage the effects of complexity in the system. Insights from this study are expected to improve managerial effectiveness in the supply chain.

Committee:

Monideepa Tarafdar (Committee Co-Chair); Ragu-Nathan T.S. (Committee Co-Chair); Thuong Le (Committee Member); Abdollah Afjeh (Committee Member)

Subjects:

Business Administration; Management

Keywords:

Complexity; Product Complexity; Supply Base Complexity; Operational Coordination; Strategic Coordination; Supply Chain Efficiency; Supply Chain Responsiveness; Inter-Organizational Systems

LeMaster, Cheryl Faye. Leading Change in Complex Systems: A Paradigm Shift
Ph.D., Antioch University, 2017, Leadership and Change
This qualitative study is an in-depth exploration of the experiences of 20 executive-level leaders from American corporations, government agencies, hospitals, and universities. At the heart of this investigation are stories that reveal the challenge of leading change in complex systems from the leader perspective, creating an opportunity to explore sense-making and sense-giving as guided by individual values and organizational contexts. Complexity Science, the framework for this research, is the study of relationships within and among systems. The aim of approaching this research from a complexity perspective is to gain a more realistic view of the issues and challenges that leaders face during change, and how they make meaning and respond in today’s richly interconnected and largely unpredictable information age. Results highlight the critical role an individual’s beliefs and values—as shaped by experience and guided by context—have on leadership and the organization’s approach to change implementation. This study identifies three leadership conceptual categories: (1) traditional (linear and hierarchical in nature); (2) complexity (non-linear, suited to densely interconnected and rapid-paced environments), and (3) complexity-plus (including change goals beyond the organization and its members). Though traditional and complexity styles are largely known in the literature, the complexity-plus style is a newly identified category. Drawing from Uhl-Bien, Marion, and McKelvey’s (2007) Complexity Leadership Theory (CLT) model, which delineates three leadership functions: (1) administrative (results orientation); (2) adaptive (learning orientation); and (3) enabling (support orientation), the key conclusions of this investigation are integrated with the CLT model to create the Leadership Values Framework. 
The results of this research contribute to our understanding of the influence of a leader’s values, enhancing our ability as academics and practitioners to better appreciate, support, and develop change leadership in a new paradigm. The electronic version of this dissertation is at AURA: Antioch University Repository and Archive http://aura.antioch.edu/ and OhioLINK ETD Center, https://etd.ohiolink.edu

Committee:

Alan Guskin, Ph.D. (Committee Chair); Elizabeth Holloway, Ph.D. (Committee Member); Merryn Rutledge, Ed.D. (Committee Member); Peter Martin Dickens, Ph.D. (Committee Member)

Subjects:

Behavioral Psychology; Organizational Behavior; Social Psychology

Keywords:

Qualitative study; Narrative Inquiry; Leading large-scale change; Leadership approach; Complexity Science; Complexity Theory; Complex Adaptive Systems; Executives; Workplace; Organizations

Lacayo, Virginia. Communicating Complexity: A Complexity Science Approach to Communication for Social Change.
Doctor of Philosophy (PhD), Ohio University, 2013, Mass Communication (Communication)
This study aims to contribute to the theoretical development and the effective practice of Communication for Social Change by exploring the application of the principles and ideas of Complexity Science to Communication for Social Change endeavors. The study provides a theoretical framework for the analysis of Communication for Social Change initiatives and presents guidelines for organizations, including both practitioner organizations and donor agencies, interested in using Complexity Science principles and ideas to inform their Communication for Social Change strategies. The study employs an interpretive approach and an instrumental case study method of inquiry. Five principles distilled from the literature on Complexity Science are used to identify examples from the work of Puntos de Encuentro, a feminist non-profit organization working in Communication for Social Change in Central America. These examples illustrate how Complexity Science principles can be applied to Communication for Social Change strategies, and reveal the challenges and implications that applying these principles holds for organizations working in the field. The major conclusions and insights of the study are twofold. First, Complexity Science can provide social change organizations, development agencies, donors, scholars, and policy makers with a useful framework for addressing complex social issues, and it may make Communication for Social Change strategies more effective at creating social change. Second, Communication for Social Change strategies need to be supported by organizational cultures that guarantee a shared vision and direction and promote power decentralization, self-organizing, and innovation, as this is what provides organizations with the flexibility and adaptability required by a continuously changing environment.
The study concludes with a set of recommendations that aim to serve as guidelines for Communication for Social Change practitioners and donors when approaching complex social issues, as well as suggestions for future research.

Committee:

Rafael Obregón (Committee Chair); Josep Rota (Committee Member); Arvind Singhal (Committee Member); Lynn Harter (Committee Member); Steve Howard (Committee Member)

Subjects:

Communication; Entrepreneurship; Evolution and Development; Mass Communications; Multimedia Communications; Organizational Behavior; Systems Science

Keywords:

Communication for Social Change; Social Change; Complexity Science; Communication for Development; Communication Strategies for Social Justice; Social Change in Nicaragua; Communication Strategies and NGO; Complexity and International Development

Vinke, Louis Nicholas. Factors Affecting the Perceived Rhythmic Complexity of Auditory Rhythms
Master of Arts (MA), Bowling Green State University, 2010, Psychology/Experimental
Musical rhythms vary in their complexity. However, how different factors affect the perceived complexity of a rhythm is relatively poorly understood. The primary aim of this thesis was to consider the contribution of three factors to the perceived complexity of a rhythm: (1) musical training, (2) whether or not individuals were asked to tap the beat of the rhythm at a preferred rate before making a complexity rating, and (3) tempo. Of additional interest was the extent to which previously proposed measures of rhythmic complexity can account for variations in perceived rhythmic complexity. In two experiments, participants listened to a set of monotone auditory rhythms and rated their complexity on a 6-point scale from 1 ("Very Simple") to 6 ("Very Complex"). In Experiment 1, musically trained and untrained participants were instructed in separate blocks of trials to tap out a regular beat along with the rhythm or to simply listen to the rhythm before making their rating; all rhythms were presented at a fixed tempo (200 ms inter-onset interval). In Experiment 2, a new sample of musically trained and untrained participants rated the complexity of the most and least complex rhythms from Experiment 1. These rhythms were presented at a range of tempi in both tapping-the-beat and listen-only conditions. Overall, musically untrained participants tended to judge rhythms to be more complex than musically trained participants did. In Experiment 1, rhythmic complexity ratings made during the tapping-the-beat condition were significantly higher than ratings made during the listen-only condition; however, this was the case only for musically untrained participants. In Experiment 2, rhythmic complexity ratings increased with increasing tempo. Differences in tapping variability as a function of musical training were found, although tempo did not affect participants' tapping variability in general.
Three beat-based measures of rhythmic complexity made reliable and significant predictions of participants’ complexity ratings to varying degrees, and highlight the crucial role of beat perception in the perception of rhythmic complexity.
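To illustrate what a beat-based complexity measure looks like, here is a toy metric that weights each onset by a GTTM-style metrical position, so off-beat onsets raise the score. This is purely illustrative: the weight table and scoring rule are invented for the sketch and are not one of the three measures evaluated in the thesis.

```python
# GTTM-style metrical weights on a 16-step (one bar of 4/4) grid:
# strong positions get high weight, weak positions low weight.
WEIGHTS = [4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0]

def toy_complexity(pattern):
    """Sum, over onsets, of how far each onset falls below the strongest
    metrical position; onsets on weak positions add more complexity."""
    return sum(max(WEIGHTS) - WEIGHTS[i] for i, x in enumerate(pattern) if x)

on_beat  = [1 if i % 4 == 0 else 0 for i in range(16)]  # onsets on every beat
off_beat = [1 if i % 4 == 2 else 0 for i in range(16)]  # same density, off the beat
print(toy_complexity(on_beat), toy_complexity(off_beat))  # 5 12
```

The on-beat pattern scores 5 while the same number of onsets shifted off the beat scores 12, mirroring the intuition that beat perception drives perceived rhythmic complexity.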

Committee:

J. Devin McAuley, PhD (Advisor); Verner P. Bingman, PhD (Committee Member); Laura C. Dilley, PhD (Committee Member)

Subjects:

Experiments; Music; Psychology

Keywords:

rhythm perception and cognition; perceived rhythmic complexity; tempo; musical training; beat perception; tapping; measures of complexity; auditory rhythms

Almaghariz, Eyad S. Determining When to Use 3D Sand Printing: Quantifying the Role of Complexity
Master of Science in Engineering, Youngstown State University, 2015, Department of Mechanical and Industrial Engineering
The additive manufacturing industry has the potential to transform nearly every sector of our lives and jumpstart the next Industrial Revolution. Engineers and designers have been using 3D printers for more than three decades, but mostly to make prototypes quickly and cheaply before embarking on the expensive business of tooling up a factory to produce the real thing. In the sand casting industry, a growing number of companies have adopted 3D sand printing to produce final casts. Yet recent research suggests that the use of 3D sand printing has barely begun to achieve its potential market. It is not surprising that executives are having difficulty adopting additive manufacturing; the technology has many second-order effects on business operations and economics. One of the most important factors is the lack of awareness of additive manufacturing's applications and value in the sand casting manufacturing process, which is significantly slowing adoption rates. This research will help executives optimize their adoption decision by answering the question: "At what level of part complexity should sand printing be used instead of the conventional process in mold and core manufacturing?" Moreover, this thesis defines and analyzes the geometric attributes that influence part complexity. In the conventional sand casting process, a higher level of complexity leads to higher manufacturing cost. In the additive manufacturing process, on the other hand, the manufacturing cost is fairly constant regardless of the level of complexity. Therefore, 3D sand printing provides a unique advantage: increasing the geometric complexity of a part has no impact on mold and core manufacturing cost, a property known as "complexity for free."
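The economic logic behind the adoption question can be sketched with a toy cost model: conventional mold and core cost grows with a geometric-complexity index while printed cost stays roughly flat, and the break-even point is where the two curves cross. All numbers and the linear form below are invented for illustration; the thesis quantifies complexity and cost empirically.

```python
def conventional_cost(c, base=100.0, slope=40.0):
    """Conventional mold/core cost: rises with complexity index c."""
    return base + slope * c

def printed_cost(c, flat=300.0):
    """3D sand printing: roughly flat in c ("complexity for free")."""
    return flat

# Break-even complexity index: switch to printing once the conventional
# cost line crosses the flat printing cost.
breakeven = (300.0 - 100.0) / 40.0
print(breakeven)  # 5.0: above this complexity index, printing is cheaper
```

Under these made-up parameters a part with complexity index 6 costs 340 conventionally but 300 printed, so it should be printed; a simpler part with index 4 should be cast conventionally.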

Committee:

Brett Conner, PhD (Advisor); Martin Cala, PhD (Committee Member); Guha Manogharan, PhD (Committee Member)

Subjects:

Industrial Engineering

Keywords:

Geometric complexity vs cost; 3D sand printing cost; Conventional sand casting cost; Complexity vs cost in sand casting; mold and core manufacturing

Islas Munoz, Juan. Automotive design aesthetics: Harmony and its influence in semantic perception
MDES, University of Cincinnati, 2013, Design, Architecture, Art and Planning: Design
Aesthetics play a crucial role in a consumer's purchase decision of a vehicle. While creating aesthetically pleasing vehicle designs is already challenging for automakers, it is even more challenging to do so while remaining at the cutting edge of design, generating new and fresh aesthetics that allow them to differentiate themselves from other companies and stand out. These iterations in search of new aesthetics lead designers to take risks, generating sophisticated and provocative designs that challenge conventional aesthetic features. In addition, design modifications to accommodate manufacturing criteria can potentially disrupt the original design concept. This can result in a controversial design that communicates negative semantic messages to the consumer. This thesis proposes the use of harmony (a unified visual whole in which every aesthetic feature appears to belong together) as a crucial variable for generating positive semantic messages. A survey was conducted using vehicle images with different levels of harmony and complexity. These images were rated on a positive-negative semantic scale based on concepts related to a design's communication of quality (price, build quality, and design execution) and performance (safety, driveability, driving performance). Results show the importance of creating and preserving harmonious car designs so that positive semantics are transmitted, which can contribute to a vehicle's commercial success.

Committee:

Peter Chamberlain, M.F.A. (Committee Chair); Raphael Zammit (Committee Member)

Subjects:

Design

Keywords:

automotive design; automotive aesthetics; automotive semantics; automotive design harmony; automotive design complexity; automotive aesthetics harmony complexity

Singh, Manjeet. Mathematical Models, Heuristics and Algorithms for Efficient Analysis and Performance Evaluation of Job Shop Scheduling Systems Using Max-Plus Algebraic Techniques
Doctor of Philosophy (PhD), Ohio University, 2013, Mechanical and Systems Engineering (Engineering and Technology)
This dissertation develops efficient methods for calculating the makespan of a perturbed job shop. All iterative scheduling algorithms require their performance measure, usually the makespan, to be calculated during every iteration. Therefore, this work can enhance the efficiency of many existing scheduling heuristics, e.g., Tabu Search, Genetic Algorithms, and Simulated Annealing. This increased speed provides two major benefits. The first is the capability of searching a larger solution space, and the second is the capability of finding a better solution in the time saved. The following is a list of the major highlights of this dissertation. The dissertation extends the hierarchical block diagram model formulation and composition originally proposed by Imaev [2]. An algorithm is developed that reduces the complexity of calculating the makespan of the perturbed schedule of a job shop with no recirculation from O(MN log MN) to O(N^2), where M is the number of machines and N the number of parts. An efficient algorithm that calculates the Kleene star of a lower triangular matrix is presented. This algorithm has a complexity of O(n^3/6), which is 1/16th of the traditional approach. Finally, a novel pictorial methodology, called SBA (Serial Block Addition), is developed to calculate the makespan of a perturbed job shop. A very efficient single perturbed machine scheduling algorithm, with complexity O(N^2), is derived using the SBA method. The algorithm was tested on 10,000 randomly generated problems; its solutions were within a 3% deviation of the optimal solutions 95.27% of the time.
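In max-plus algebra, "addition" is max and "multiplication" is +, and for a strictly lower triangular (hence nilpotent) matrix A the Kleene star series I + A + A^2 + ... terminates, so it can be filled in column by column with roughly n^3/6 max/plus operations. The sketch below shows that standard forward-substitution computation; it is a generic illustration under these assumptions, not the dissertation's own algorithm.

```python
import math

NEG_INF = -math.inf  # the max-plus "zero" (no path)

def kleene_star_lower(A):
    """Max-plus Kleene star S = I + A + A^2 + ... for a strictly lower
    triangular matrix A. S[i][j] is the heaviest path weight from node j
    to node i; the triple loop does roughly n^3/6 max/plus operations."""
    n = len(A)
    S = [[NEG_INF] * n for _ in range(n)]
    for i in range(n):
        S[i][i] = 0.0  # max-plus identity on the diagonal
    for i in range(n):
        for j in range(i):
            best = A[i][j]  # the direct edge j -> i, if any
            for k in range(j + 1, i):
                best = max(best, A[i][k] + S[k][j])  # path j -> ... -> k -> i
            S[i][j] = best
    return S

# Edges 0 -> 1 (weight 2) and 1 -> 2 (weight 3).
A = [[NEG_INF, NEG_INF, NEG_INF],
     [2.0,     NEG_INF, NEG_INF],
     [NEG_INF, 3.0,     NEG_INF]]
S = kleene_star_lower(A)
print(S[2][0])  # heaviest 0 -> 2 path: 2 + 3 = 5.0
```

In job shop terms, such path weights correspond to accumulated processing times, which is why the Kleene star is a building block for makespan computation in max-plus models.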

Committee:

Robert Judd (Advisor)

Subjects:

Engineering; Industrial Engineering; Mathematics

Keywords:

Job Shop Scheduling; Max Plus Algebra; Mathematical Modeling; Computational Complexity; Makespan; Heuristics; Simulation; Scheduling Algorithm

Phillips, Benjamin W. The Ecological Impacts of Non-Native Annual and Native Perennial Floral Insectaries on Beneficial Insect Activity Density and Arthropod-Mediated Ecosystem Services Within Ohio Pumpkin (Cucurbita pepo) Agroecosystems
Master of Science, The Ohio State University, 2013, Entomology
Pumpkins (Cucurbita pepo) rely on insect-mediated pollination, and host a distinct community of pests, natural enemies and pollinators. My goal was to determine if biocontrol and pollination services in pumpkins were affected by local habitat management and landscape composition in Ohio. I measured biocontrol through predation and parasitism rates of sentinel egg cards of squash bug (Anasa tristis) and the spotted cucumber beetle (Diabrotica undecimpunctata howardii), and collected adults of striped cucumber beetle (Acalymma vittatum) to determine parasitism activity in 2011-2012. I used pitfall traps to determine the activity density of ground-dwelling predators per field per sample period, and video cameras to determine the taxa responsible for egg mortality. I measured visitation frequency and duration of Apis mellifera, Bombus spp., and Peponapis pruinosa in male and female flowers of pumpkins in 2011-2012, and pollen deposition across the pollination window (0600-1200 hr) in 2012. I tested the Intermediate Landscape-Complexity Hypothesis in one year by determining the combined effects of surrounding landscape composition and local habitat management on the relative visit frequency of pollinators, activity density of predators, and rates of predation and parasitism services by ranking general linear mixed models. I found that only D. undecimpunctata experienced a significant amount of egg predation, which was positively correlated to the percentage of field crops within a 1500 m radius of pumpkin fields. The parasitism of A. vittatum and the visitation frequency of A. mellifera were diluted in the presence of fruit and vegetable habitats within a 1500 m radius, and P. pruinosa visit frequency was diluted within a 500 m radius. Parasitism of A. vittatum was positively associated with urban habitats within a 500 m radius, and the visit frequency of P. pruinosa was positively associated with urban habitats within a 1500 m radius. Predation of A. tristis and D.
undecimpunctata eggs and parasitism of A. vittatum adults were not significantly affected by the addition of annual non-native floral insectaries of sweet alyssum (Lobularia maritima) or a perennial native insectary planted adjacent to the crop. Formicidae were the largest contributor to egg predation, and also responded positively to urban habitats. Activity density of Carabidae and Orthoptera captured in pitfalls located in alyssum insectaries increased with higher percentages of mowed turfgrass habitats. In 2011 A. mellifera was more abundant in flowers than other bees, and in 2012 Bombus spp. was the most abundant. A. mellifera spent more time in flowers, and had a higher visit frequency in female flowers. In both years, Bombus spp. had a significantly higher visit frequency after 0700 hr, and both Bombus spp. and P. pruinosa spent less time in flowers after 0800 hr. Pollen loads on female flowers indicated that the majority of pollen deposited across the 6 hr window was transferred between 0600-0800 hr, which is when all three bee species foraged with equal frequency and similar visit duration, though Bombus spp. was the largest contributor. Alyssum and perennial floral insectaries did not have an effect on the foraging activity of bees. However, visits to pumpkins by A. mellifera showed that pumpkin fields close to an increased percentage of forest habitats supported higher visit frequencies to pumpkins with alyssum floral insectaries.

Committee:

Mary Gardiner (Advisor); Karen Goodell (Committee Member); Robin Taylor (Committee Member); Celeste Welty (Committee Member)

Subjects:

Agriculture; Ecology; Entomology

Keywords:

Intermediate Landscape-Complexity Hypothesis; pollination; biological control; pumpkin; Cucurbita pepo; floral strips; habitat management; annual insectary; perennial insectary; Ohio; general linear mixed modeling; glmm

Tanova, Nadya. An Inquiry into Language Use in Multilinguals’ Writing: A Study of Third-Language Learners
Doctor of Philosophy, The Ohio State University, 2012, EDU Teaching and Learning
In recent years, globalization, migration and mobility, the digital revolution, the predominance of English as the lingua franca, and the prominence of writing and written communication have reshaped the linguistic landscape in many regions worldwide, including the U.S. Hence, nowadays, literacy in more than two languages is a necessity, and multilingualism is the norm for many people around the globe. Yet, despite the growing body of knowledge in second language (L2) writing research addressing increasingly diverse writing contexts, little is known about multilingual writers; even less is understood about how they construct texts and negotiate meaning as they shift among languages. Accordingly, the purpose of this dissertation was to examine the nature of multilinguals’ writing with respect to language use and language-switching (L-S). The participants were second (SL) and foreign language (FL) students at a US university who were studying a third language (L3) as an FL. They performed three writing tasks in their L2 and L3. The complexity theory approach provided the conceptual framework of the study. Data were collected using a background questionnaire, think-aloud protocols, written texts, logfiles, and interviews. Statistical and qualitative analyses indicate quantitative and qualitative differences between (a) multilinguals’ L2 and L3 writing; and (b) SL and FL third language learners’ L3 writing. These distinctions concern the amount of L1, L2, and L3 use, and the frequency and direction of L-S. Furthermore, the results point to quantitative and qualitative differences between bilinguals’ and multilinguals’ L2 writing. In addition, it was found that L2 proficiency and L3 development did not seem to have influenced L-S frequency in L3 writing. Moreover, the study identified conditions that seemed to favor monolingual and mixed utterances in multilinguals’ composing.
Thus, it revealed qualitative differences between multilingual as opposed to bilingual writers that are further confirmed by a finding pointing to the distinct roles of L1 and L2 in multilinguals’ L3 writing. However, although group averages pointed to the above trends, intra-group and intra-individual analyses from a complexity theory perspective revealed salient individual patterns. The present study thus generated a model of multilingual writing which conceptualizes it as a complex, dynamic, open, non-linear, and adaptive system. This model made it possible to focus not on single variables and linear cause-effect relationships, but instead to discern relationships among all the components of the system. Consequently, the model was used to depict each writer’s dynamic configurations in order to capture his/her idiosyncratic patterns of language use and the mechanisms related to how changes in interactions of the parts generated emergence of new writing patterns. Hence, the findings imply that multilingual writers’ languages are dynamically interconnected parts of their writing system. Thus, their L2 and L3 writing are not isolated entities and cannot be understood completely if examined separately. Therefore, L2 writing theory, research, and instruction will not be accurate and inclusive if they do not take into consideration the context of multilingual writers, their writing, and the phenomenon of switching among languages, which permeates the whole process of L2/L3 writing.

Committee:

Alan Hirvela (Committee Chair); Leslie Moore (Committee Member); Wynne Wong (Committee Member)

Subjects:

Adult Education; Composition; Education; Educational Theory; Foreign Language; Language; Linguistics; Multilingual Education

Keywords:

second language writing; third language writing; multilingual writing; complexity theory; SLA; third language acquisition; multilingualism

Morris, Hannah Ruth. Paleoethnobotanical Investigations at Fort Center (8GL13), Florida
Master of Arts, The Ohio State University, 2012, Anthropology
Archaeologists have long been interested in the emergence and development of social complexity. Traditional progressive theories of cultural evolution link socio-political complexity with agriculture. Recent research on groups called complex hunter-gatherers provides support for the idea that agriculture is not necessary for social complexity. This topic is addressed by examining plant use at Fort Center, an archaeological site in Southwestern Florida. Fort Center was first occupied around cal. 750 B.C., and earlier researchers proposed that the prehistoric inhabitants of the site cultivated maize (see Sears 1982). This thesis addresses the use of plants, including maize, at the site. The results of the macrobotanical analysis of samples from 2010 excavations do not support earlier claims that maize was cultivated during the prehistoric occupation of Fort Center. These results have implications for the way we view complex hunter-gatherers in North America.

Committee:

Kristen Gremillion (Advisor); Victor Thompson (Advisor); Julie Field (Committee Member)

Subjects:

Archaeology

Keywords:

Florida; Calusa; Fort Center; socio-political complexity; agriculture; complex hunter-gatherers; paleoethnobotany; maize

Ji, Bo. Design of Efficient Resource Allocation Algorithms for Wireless Networks: High Throughput, Small Delay, and Low Complexity
Doctor of Philosophy, The Ohio State University, 2012, Electrical and Computer Engineering

Designing efficient resource allocation mechanisms is both a vital and challenging problem in wireless networks. In this thesis, we focus on developing resource allocation and control algorithms for wireless networks, aimed at jointly optimizing three critical dimensions of network performance: throughput, delay, and complexity.

We first focus on multihop wireless networks under general interference constraints, and aim to design efficient scheduling algorithms that jointly optimize network performance across the aforementioned three dimensions. We develop frameworks that enable us to design throughput-optimal scheduling algorithms that reduce delays and/or incur a lower complexity in the following sense: a smaller amount of required information, simpler data structures, and lower communication overhead.

We then turn to a simpler setting of single-hop multi-channel systems. A practically important example of such multi-channel systems is the downlink of a single cell in 4G OFDM-based cellular networks (e.g., LTE and WiMax). Our goal is to design efficient scheduling algorithms that achieve provably high performance in terms of both throughput and delay, at a low computational complexity. To that end, we first develop new easy-to-verify sufficient conditions for rate-function delay optimality in the many-channel many-user asymptotic regime (i.e., maximizing the decay-rate of the probability that the largest packet waiting time in the system exceeds a certain fixed threshold, as system size becomes large), and for throughput optimality in non-asymptotic settings. These sufficient conditions have been designed such that an intelligent combination of algorithms that satisfy both of the sufficient conditions allows us to develop low-complexity hybrid algorithms that are both throughput-optimal and rate-function delay-optimal. Further, we propose simpler greedy policies that are throughput-optimal and rate-function near-optimal, at an even lower complexity.

Finally, we investigate the scheduling problem in multihop wireless networks with flow-level dynamics. We explore potential inefficiency and instability of the celebrated back-pressure algorithms in the presence of flow-level dynamics, and provide interesting examples that are useful for obtaining insights into developing a unified throughput-optimal solution.

Our results in this thesis shed light on how to design resource allocation and control algorithms that can simultaneously attain both high throughput and small delay in practical systems with low-complexity operations. On the other hand, our studies also reveal that once flow-level dynamics are taken into account, even optimizing the single metric of throughput becomes much more challenging, let alone achieving high network performance over all three dimensions.
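The throughput-optimal designs discussed above build on queue-length-based scheduling. As an illustrative sketch only (not the thesis's hybrid algorithms), the classic MaxWeight rule for a single-hop multi-channel downlink serves, on each channel, the user maximizing queue length times service rate; the function and variable names below are assumed for illustration.

```python
def max_weight_schedule(queues, rates):
    """Classic MaxWeight rule: on each channel, serve the user with the
    largest queue-length x service-rate product. This baseline is
    throughput-optimal but makes no delay or complexity guarantees,
    which is where refinements like those described above come in."""
    schedule = {}
    for channel, channel_rates in rates.items():
        schedule[channel] = max(
            range(len(queues)), key=lambda user: queues[user] * channel_rates[user]
        )
    return schedule
```

For example, with queue lengths [5, 1] and per-user rates [1, 10] on a single channel, the rule serves user 1 (weight 10 beats 5).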

Committee:

Ness Shroff (Advisor); Ness Shroff (Committee Chair); Atilla Eryilmaz (Committee Member); Can Koksal (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Scheduling; Wireless Networks; High Throughput; Small Delay; Low Complexity; Fluid Limits; Large-Deviations Theory

Shah, Mihir P. Evaluating Depositional Complexity and Compartmentalization of the Rose Run Sandstone (Upper Cambrian) in Eastern Ohio
Master of Science (MS), Bowling Green State University, 2013, Geology
The Upper Cambrian Rose Run Sandstone in eastern Ohio includes mixed siliciclastic and carbonate lithofacies, deposited in a shallow marine, tidally-influenced environment. A study of 17 wells including 4 cores with a total thickness of 21 m, from Holmes County (well #2892), Coshocton County (wells #2989 and #3385), and Morgan County (well #2923), reveals 14 siliciclastic and 5 carbonate lithofacies. Intertidal deposits include heterolithic wavy bedded sandstone and mudstone (lithofacies SMw), heterolithic lenticular bedded sandstone and mudstone (lithofacies SMk), heterolithic flaser bedded sandstone and mudstone (lithofacies SMf), interbedded planar laminated sandstone and mudstone (lithofacies SMl), interpreted as tidalites, massive mottled sandstone (lithofacies Smm), and massive mudstone (lithofacies Mm). Subtidal clastic deposits include medium-scale planar tabular cross-bedded sandstone (lithofacies Sp), herringbone cross-bedded sandstone (lithofacies Sx), massive sandstone (lithofacies Sm), glauconite-rich massive sandstones (lithofacies Smg), hummocky stratified sandstone (lithofacies Sh), and laminated mudstone (lithofacies Ml). Interbedded carbonates include dolo-mudstones (lithofacies Cm), bioturbated and mottled dolo-mudstones (lithofacies Cmm), dolo-packstones with rip-up clasts ("flat-pebble conglomerate") (lithofacies Cpmr), dolo-packstones with mud drapes ("cryptalgal lamination") (lithofacies Cpl), and convoluted bedded dolo-mudstone (lithofacies Cmmc). The Rose Run Sandstone in this region is interpreted to have been deposited in a shallow marine environment of normal salinity with extensive tidal flats, mixed siliciclastic-carbonate deposition, strong tidal influence, and reworking of carbonate materials. This study reveals compartmentalization of the Rose Run Sandstone at different scales.
The reservoir quality is mainly controlled by the amount of dolomite cement, quartz and feldspar overgrowths, and clay content, which influence porosity and possibly permeability. Interlaminated clay/mud baffles are common small-scale features. Textural and mineralogical variability caused by differences in grain size may influence reservoir quality. Interbedded dolostones vary in thickness from 20 cm to around 1.5 m, act as baffles to fluid flow in all directions, and create fluid-flow compartments preventing effective pore-fluid interconnectivity between sandstone units. Core and geophysical log analysis from 17 wells suggests depositional complexity as one of the major reasons for compartmentalization of the Rose Run Sandstone in the study area. The unit is better developed in the eastern part of the study area, from where its thickness decreases in all other directions. Also, there is better connectivity among sand bodies in the north-south direction, which is interpreted to be approximately parallel or sub-parallel to the paleo-shoreline. Together, structure contour and sand-isolith maps reveal up-dip stratigraphic traps in the study area, and in areas with better well control the structure contour map shows local structural complexity in the form of isolated anticlinal features, which in places are interpreted as pre-depositional highs.

Committee:

James Evans, PhD (Advisor); Charles Onasch, PhD (Committee Member); Jeffery Snyder, PhD (Committee Member)

Subjects:

Geology

Keywords:

Rose Run Sandstone; Reservoir compartmentalization; Depositional complexity; Tidal environment; Peritidal environment

Lou, Shanshan. Simultaneous Media Use and Advertising: The Effects of Salient Web Ads in a New Media World
Doctor of Philosophy (PhD), Ohio University, 2013, Mass Communication (Communication)
The current study represents one of the first attempts to use experimental design to explore individuals' processing of both web ads and television ads in a simultaneous media environment. In particular, based on the theoretical propositions of the Elaboration Likelihood Model (ELM) of persuasion, the study examines the relationship between salient web ad design, manipulated as a complex web ad in a simple background, and cognitive processing route in a simultaneous media environment. The findings suggest that simple web ads lead to better recall of brand and product than complex web ads when users are watching television at the same time. The results also imply that users do not view complex web ads more positively than simple web ads in a simultaneous media environment. The current study contributes to both the simultaneous media use and cognitive processing literatures. The study also provides recommendations for web advertisers, suggesting that they refine ad designs and keep web ads simple in simultaneous media environments in order to generate better recall. The study may motivate more research devoted to explaining how people process web and television content simultaneously and what factors attract users' attention and/or influence their attitudinal evaluations.

Committee:

Roger Cooper (Committee Chair)

Subjects:

Communication

Keywords:

Simultaneous Media Use; Advertising; Web ads; Salience; Complexity

Dinh, Hiep. Exploring Algorithms for Branch Decompositions of Planar Graphs
Master of Science (MS), Ohio University, 2008, Computer Science (Engineering and Technology)

A branch decomposition is a type of graph decomposition closely related to the widely studied tree decompositions introduced by Robertson and Seymour. Unlike tree decompositions, optimal branch decompositions and the branch-width of planar graphs can be computed in polynomial time. The ability to construct optimal branch decompositions in polynomial time leads to efficient solutions for generally hard problems on instances restricted to planar graphs.

This thesis studies efficient algorithms for computing optimal branch decompositions for planar graphs. Our main contribution is an improved software package for graph decompositions with efficient implementations of two additional decomposition classes: carving decompositions and branch decompositions. Polynomial time solutions for Independent-Set on general graphs using path decompositions, tree decompositions, and branch decompositions with bounded width are also explored as examples of how graph decompositions can be used to solve NP-Hard problems.
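To illustrate the decomposition idea in its simplest form (a hedged sketch, not the thesis's software package): on a tree, where every separator reduces to a single parent-child edge, Independent-Set falls to a linear-time dynamic program with an include/exclude table per vertex. Width-bounded path, tree, and branch decompositions generalize exactly this bookkeeping to richer separators.

```python
def max_independent_set_tree(adj, root=0):
    """Maximum independent set on a tree via dynamic programming.
    inc[v] / exc[v] = best set size in v's subtree with v included / excluded.
    adj is an adjacency list; the graph must be a connected tree."""
    n = len(adj)
    inc = [1] * n   # v itself counts when included
    exc = [0] * n
    parent = [None] * n
    parent[root] = root
    order = [root]
    for v in order:                      # BFS to fix parent pointers
        for w in adj[v]:
            if parent[w] is None:
                parent[w] = v
                order.append(w)
    for v in reversed(order[1:]):        # fold children into parents, leaves first
        p = parent[v]
        inc[p] += exc[v]                 # if p is in the set, v must be out
        exc[p] += max(inc[v], exc[v])    # if p is out, take v's better option
    return max(inc[root], exc[root])
```

On the 4-vertex path 0-1-2-3 this returns 2 (e.g., {0, 2}); on a star with three leaves it returns 3 (all the leaves).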

Committee:

David Juedes (Advisor); David Chelberg (Committee Member); Cynthia Marling (Committee Member); Xiaoping Shen (Committee Member)

Subjects:

Computer Science

Keywords:

branch decompositions; parameterized complexity; graph decompositions

Kruglov, Victoria. Growth of the ideal generated by a quadratic multivariate function
PhD, University of Cincinnati, 2010, Arts and Sciences: Mathematical Sciences
We find exact formulas for the growth of the ideal λAk, where λ is a quadratic element of the algebra of functions over the Galois field 𝔽q for q = 2 and q = 3. More precisely, we calculate dim(λAk), where Ak is the subspace of elements of degree less than or equal to k. The results clarify some of the assertions made in the articles of Yang, Chen, and Courtois [YC], [YCC] regarding the complexity of the XL algorithm.
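For orientation, a standard fact (not the thesis's result): over 𝔽₂ the function algebra in n variables has the squarefree monomials as a basis, so the subspace of elements of degree at most k has dimension

```latex
\dim A_k \;=\; \sum_{i=0}^{k} \binom{n}{i} \qquad (q = 2).
```

The quantity dim(λA_k) computed in the thesis measures how much of this dimension survives multiplication by the quadratic element λ, which is what drives the complexity estimates for XL.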

Committee:

Jintai Ding, PhD (Committee Chair); Timothy Hodges, PhD (Committee Member); Dieter Schmidt, PhD (Committee Member)

Subjects:

Mathematics

Keywords:

multivariate;quadratic;XL algorithm;complexity;homology;semi-regular

Purekar, Dhanesh Madhukar. A Study of Modal Testing Measurement Errors, Sensor Placement and Modal Complexity on the Process of Finite Element Correlation
MS, University of Cincinnati, 2005, Engineering : Mechanical Engineering
This thesis describes studies of finite element (FE) model validation methods for structural dynamics. There are usually discrepancies between predictions of the structural dynamic properties based on an initial FE model and those yielded by experimental data from tests on the actual structure. In order to make predictions from the model suitable for evaluating the dynamic properties of the structure, and thus to optimize its design, the model has to be validated. Model validation consists of: data requirements, test planning, experimental testing, correlation, error location, and updating. The requirement for optimum experimental data is coupled with test planning to design an optimum modal test, in terms of specifying the best suspension, excitation, and response locations. The Effective Independence algorithm developed by Dr. Kammer has been implemented for the optimal placement of sensors on the test structure. The assumption that the test results represent the true dynamic behavior of the structure, however, may not be correct because of various measurement errors. The errors involved in modal testing (mass loading, sensor misalignment, modal parameter estimation, DSP errors) are investigated, along with their effects on estimated frequency response functions (FRFs) and on the modal parameters extracted from the FRFs. Also, the sensitivity of the correlation between the experimental data and the analytical data to these measurement errors is studied. The correlation phase of the model validation process also demands calculation of real modes from the complex modes of the experimental test. This last topic is particularly important for the validation of FE models, and for this reason a number of different measures of modal complexity are presented and discussed in this thesis.

Committee:

Dr. Randall Allemang (Advisor)

Subjects:

Engineering, Mechanical

Keywords:

Modal testing;finite element correlation;modal complexity;mass loading of accelerometers

Buder Shapiro, Jane Robin. Self structure and emotional functioning: The effect of self-complexity on success and failure
Doctor of Philosophy, Case Western Reserve University, 1992, Psychology
The present study measured the relationship between the structure of the self and change in self-esteem following a success or failure experience. People high or low in self-complexity were given bogus positive or negative feedback on a cognitive test designed to target one of several self-esteem domains. It was predicted that for individuals low in self-complexity the effects of feedback to one domain would spread to other self-esteem domains; this was not expected to occur for individuals high in self-complexity. It was also hypothesized that the self-referent complexity task would correlate with a self-neutral complexity task. The second hypothesis was confirmed while the first prediction was not. Several explanations are offered to account for the lack of significant results regarding the first hypothesis. Most importantly, an alternative explanation of the self-complexity measure in terms of information processing ability is proposed which might clarify these findings.

Committee:

Fred Zimring (Advisor)

Subjects:

Psychology, Clinical

Keywords:

Self structure; emotional functioning; self-complexity

Favela, Luis H. Understanding Cognition via Complexity Science
PhD, University of Cincinnati, 2015, Arts and Sciences: Philosophy
Mechanistic frameworks of investigation and explanation dominate the cognitive, neural, and psychological sciences. In this dissertation, I argue that mechanistic frameworks cannot, in principle, explain some kinds of cognition. In its place, I argue that complexity science has methods and theories more appropriate for investigating and explaining some cognitive phenomena. I begin with an examination of the term ‘cognition.’ I defend the idea that “cognition” has been a moving target of investigation in the relevant sciences. As such, it is not historically true that there has been a thoroughly entrenched and agreed upon conception of “cognition.” Next, I take up mechanistic frameworks. Although ‘mechanism’ is an umbrella term for a set of loosely related characteristics, there are common features: linearity, localization, and component dominance. I then describe complexity science, with emphasis on its utilization of dynamical systems modeling. Next, I discuss two phenomena that typically fall under the purview of complexity science: nonlinearity and interaction dominance. A complexity science framework guided by the theory of self-organized criticality and utilizing the methods of dynamical systems modeling can surmount a number of challenges that face mechanistic frameworks when investigating some kinds of cognition. The first challenge is epistemic and concerns the inadequacy of mechanistic frameworks to facilitate the comprehensibility of massive amounts of data across various scales and areas of inquiry. I argue that complexity science is more appropriate for making big data comprehensible when investigating cognition, particularly across disciplines. I demonstrate this via an approach called nested dynamical modeling (NDM). NDM can facilitate comprehensibility of large amounts of data obtained from various scales of investigation by eliminating irrelevant degrees of freedom of the system as they relate to the target of investigation.
The second shortcoming concerns ontological blind spots within mechanistic frameworks. Cognitive phenomena like extended cognition often fail to meet most, if not all, of the criteria assumed by many mechanistic approaches, especially component dominance. I argue that research guided by the notion of interaction dominance allows for extended cognition to be a real, empirically supported phenomenon within complex systems frameworks. In this chapter I discuss some of my experimental work on extended cognitive systems. The search for mechanisms can be a reasonable starting position when attempting to explain natural phenomena in the life sciences. However, too strict of an adherence to theoretical and methodological commitments such as linearity, localization, and component dominance can result in intractable epistemic challenges and ontological blind spots. Complexity science has theories and methods to overcome such challenges in the investigation of cognition.

Committee:

Anthony Chemero, Ph.D. (Committee Chair); Rick Dale, Ph.D. (Committee Member); Valerie Hardcastle, Ph.D. (Committee Member); Robert Richardson, Ph.D. (Committee Member)

Subjects:

Philosophy

Keywords:

Cognition;Complexity Science;Component Dominance;Dynamical Systems Modeling;Interaction Dominance;Mechanistic Explanation

Kumar, Ashwani. Optimizing Parameters for High-quality Metagenomic Assembly
Master of Science, Miami University, 2015, Microbiology
De novo assembly of metagenomic sequence data presents various computational challenges in assembling short DNA fragments into larger sequences. An efficient assembly process is limited by the computational memory requirements and run time of assembly programs. My thesis describes a method to find the number of reads and subsequence length (k) that provide an informative, good-quality assembly, reducing the need for trial and error in the assembly process. The indicators for an informative assembly are the number of contigs, the size of contigs, N50, and the number of unused reads. I used contour maps to study the correlation between independent variables (the number of reads and the length of k) and dependent variables (the number of contigs, the contig size, N50, and the number of unused reads). I used a generalized additive model to fit the assembly results for different dependent variables and find a model that describes the correlation between dependent and independent variables.
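Among the assembly-quality indicators listed above, N50 has the least self-evident definition. A minimal sketch of its computation (illustrative code, not from the thesis):

```python
def n50(contig_lengths):
    """N50: the largest length L such that contigs of length >= L
    together cover at least half of the total assembled bases."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2
    covered = 0
    for length in lengths:          # walk contigs from longest to shortest
        covered += length
        if covered >= half:         # crossed the halfway point
            return length
```

With contigs of lengths 80, 70, and 50 (200 bases total), the two largest contigs reach 150 ≥ 100 bases, so N50 = 70.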

Committee:

Iddo Friedberg (Advisor)

Subjects:

Microbiology

Keywords:

Medium complexity metagenomes, Debruijn graph, Metagenomic coverage, Number of reads, k-mer, Contig length distribution, Contour maps, Generalized additive model

Watkins, Sharon E. Thinking Outside a Shifting Box: The Lived Experiences of Innovative Public High School Principals in an Era of High Stakes Accountability
Doctor of Philosophy, The Ohio State University, 2016, EDU Policy and Leadership
The accountability era ushered in by the No Child Left Behind Act of 2001 dramatically increased the complexity of the role of public school principals and changed the context in which principals do their work. This qualitative case study focused on the lived experiences of 10 public high school principals, five traditional and five charter school leaders, serving under the competitive policy frameworks of No Child Left Behind and federal and state innovation grant programs (e.g., RttT) during the years 2008 to 2015. Three distinct studies revealed (a) principals’ perspectives on the complexity of the role, the impact of competition, accountability and innovation policies, and how the role has expanded, (b) how they perceived and employed innovative strategies to achieve their goals, and (c) emerging themes in the role, including the need to negotiate new, complex, sometimes contradictory relational and political networks, manage the term failure in regard to school success, and develop a significant practice of distributed leadership to achieve their goals. The research design employed semistructured interviews and analysis of relevant documents, government records, and other publicly available resources. Principal preparation programs, future administrators, and policymakers may benefit from understanding how these leaders led through the shift into 21st century public education in the United States by enacting policies in an increasingly complex, competitive policy environment. This study addressed a gap in the literature by analyzing the practitioner point of view as it examined the impact of historic federal and state accountability and innovation policy convergence on the role and context of public high school principals between the years of 2008 and 2015.

Committee:

Anika Anthony, Ph.D (Advisor)

Subjects:

Education; Education Policy; Educational Leadership

Keywords:

Principalship, NCLB, RttT, complexity, innovation, distributed leadership, accountability, policy

Tuft, Samantha E. Examining effects of arousal and valence across the adult lifespan in an emotional Stroop task
Doctor of Philosophy in Adult Development and Aging, Cleveland State University, 2018, College of Sciences and Health Professions
As age increases, there is evidence that people tend to pay less attention to negative information, more attention to positive information, or both. Many theoretical accounts attempt to explain this positivity bias. In the current study, I examined positivity effects across the adult lifespan by evaluating competing predictions of two theories: Socioemotional Selectivity Theory (SST), which is based in motivation, and Dynamic Integration Theory, which is based in capacity. Computer mouse tracking was used to examine effects across levels of Valence (negative, neutral, and positive) and Arousal (low, medium, and high) in an emotional Stroop task. Participants were instructed to identify the ink color of each word while ignoring word meaning. With increased age, participants responded faster and more efficiently to negative words relative to neutral words. Additionally, with increased age and emotional complexity (EC), participants’ responses were slower and more deviated for low-arousing positive words relative to neutral words, consistent with SST. Furthermore, as age and EC increased, participants had faster initiation times (ITs) for low-arousing negative words relative to neutral words, consistent with SST. The results contribute to a better understanding of emotional cognitive biases across the adult lifespan.

Committee:

Conor McLennan, Ph.D. (Committee Chair); Eric Allard, Ph.D. (Committee Member); Andrew Slifkin, Ph.D. (Committee Member); Jennifer Stanley, Ph.D. (Committee Member); Bryan Pesta, Ph.D. (Committee Member)

Subjects:

Psychology

Keywords:

valence; arousal; language; lifespan; emotional Stroop; aging; emotional complexity; socioemotional selectivity theory; dynamic integration theory

Amon, Mary Jean. Examining Coordination and Emergence During Individual and Distributed Cognitive Tasks
PhD, University of Cincinnati, 2016, Arts and Sciences: Psychology
Distributed cognition refers to situations in which task requirements are distributed among multiple agents or, potentially, off-loaded onto the environment. The idea assumes that the cognitive system is flexibly composed of various CNS components as well as non-neural bodily and environmental components, including other agents. Important to understanding distributed cognition is a consideration of how cognitive components become coordinated, and whether multi-agent cognitive coordination yields a single cognitive system: an emergent, interpersonal cognitive synergy. Synergies are organizations of anatomical (and, potentially, environmental) components into a single, functional unit, such that the components work together and regulate one another to promote task performance. Synergies exhibit reciprocal compensation, or the interaction of components to accomplish the desired goal even in the face of obstacles. Synergies have a number of additional features found in complex systems, or systems with numerous, nonlinearly interacting elements across multiple spatial and temporal scales. Complex systems offer tools for identifying some of the features of cognitive synergies. For example, 1/f scaling has been demonstrated in a range of cognitive tasks, supporting the notion that features common to both complex systems and synergies play a key role in cognitive functioning. 1/f scaling, or “pink noise,” can be used as an indicator of coordination or task interdependence, with “white noise” as an indicator of independence. Three experiments compared isolated and distributed cognition to determine whether performance is appropriately characterized as a cognitive system composed of individual agents or as distributed among (and irreducible to the behaviors of) multiple agents. Each experiment tested for interdependent and emergent properties of cognitive performance during distributed temporal estimation (TE) tasks.
1/f scaling was present during solo and dyadic tasks, providing evidence for task interdependence and emergence. 1/f scaling persisted with task modulations (Experiment 2) and perturbations (Experiment 3) to performance. Similarly, reciprocal compensation within dyads was observed across studies, which is a key feature of synergies. When one dyad member produced a shorter response-time, their partner was likely to have a longer response-time, and vice versa. The trial-by-trial compensation in the dyadic condition was associated with a mean response-time statistically equivalent to solo participants, indicating dyads were relatively successful in their goal of estimating 700 ms. The fact that both solo participants and dyads did not demonstrate longer timescale coordination beyond that observed in virtual pairs suggests that long timescale information may not be necessary to perform the task successfully. Interdependence and reciprocal compensation between participants demonstrated that properties common to complex systems and synergies emerged during a distributed cognitive task. The present studies provide preliminary support for the conjecture that a singular, non-decomposable cognitive system can be distributed. While more research is needed to understand the properties of distributed cognition, the findings support the hypothesis that the cognitive system is flexible in incorporating different parts of the CNS, non-neural parts of the body, and environment to behave adaptively.
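One common way to distinguish pink noise (an indicator of interdependence) from white noise (independence) is to fit the slope of the power spectrum on log-log axes: white noise is flat (exponent near 0), while 1/f scaling falls off with exponent near 1. A minimal, hedged sketch using a naive periodogram (illustrative only, not the studies' actual analysis pipeline):

```python
import cmath
import math

def periodogram(x):
    """Naive DFT power spectrum at frequencies k/n for k = 1..n//2."""
    n = len(x)
    mean = sum(x) / n
    centered = [v - mean for v in x]
    return [
        abs(sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))) ** 2
        for k in range(1, n // 2 + 1)
    ]

def spectral_exponent(x):
    """Estimate beta in S(f) ~ 1/f**beta as minus the least-squares slope
    of log power versus log frequency (beta ~ 0: white, beta ~ 1: pink)."""
    n = len(x)
    pts = [
        (math.log(k / n), math.log(p))
        for k, p in zip(range(1, n // 2 + 1), periodogram(x))
        if p > 0
    ]
    mx = sum(a for a, _ in pts) / len(pts)
    my = sum(b for _, b in pts) / len(pts)
    slope = (sum((a - mx) * (b - my) for a, b in pts)
             / sum((a - mx) ** 2 for a, _ in pts))
    return -slope
```

Applied to a series whose spectral power falls off as 1/f, the estimator returns an exponent near 1; for an uncorrelated series, near 0.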

Committee:

John Holden, Ph.D. (Committee Chair); Michael Riley, Ph.D. (Committee Chair); Anthony Chemero, Ph.D. (Committee Member)

Subjects:

Cognitive Therapy

Keywords:

distributed cognition;synergy;1/f noise;complexity science;temporal estimation;interpersonal coordination

Li, Chang. Complexity Analysis of Physiological Time Series with Applications to Neonatal Sleep Electroencephalogram Signals
Doctor of Philosophy, Case Western Reserve University, 2013, EECS - System and Control Engineering
This thesis investigates complexity in physiological time series, with application to neonatal sleep electroencephalography (EEG) signals. Complexity analysis is applied to two clinical data sets of neonatal sleep EEG time series to uncover the evolution of signal dynamics and its relationship to neurodevelopment and maturation. A review of the advantages and disadvantages of various complexity measures is provided, and it is determined that nonlinear dynamic analysis is complementary to the traditional linear methods for EEG signal processing. Surrogate data analysis is used to test for nonlinear structure in the signal. The complexity of the neonatal sleep EEG signals was further quantified by evaluating two complexity measures, Approximate Entropy (ApEn) and Sample Entropy (SaEn). The suitability of ApEn and SaEn for moderate-length data and their relative robustness to noise make them good candidates for analyzing EEG time series data. Parameter selection is of utmost importance in the computation of complexity measures, and this was addressed in the thesis by improving the process of determining the appropriate time delay. The time delay determination process was applied to both synthetic and real data, and incorporated into the computation of ApEn and SaEn. The two clinical data sets used in this study consist of both preterm and full-term neonates. The two data sets were collected with different cohorts, sampling rates, and data collection hardware. The cohorts in one data set are all healthy, while cohorts in the other are either sick or healthy. Despite the vast differences between the two data sets, the following conclusions apply to both: 1) surrogate data tests performed on both data sets show evidence of nonlinear structure;
2) the results further suggest the necessity of using a non-unity time delay for the calculation of ApEn and SaEn; 3) ApEn and SaEn were shown to be effective in quantifying the temporal patterns in the dynamic process of neonatal sleep EEG signals.
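Sample Entropy admits a compact definition: SaEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within tolerance r (Chebyshev distance) and A counts the same pairs still matching when extended to length m+1. A minimal sketch with unit time delay (the thesis argues for a non-unity delay; this illustrative version omits that refinement):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B): B counts template pairs of length m within
    tolerance r in the Chebyshev (max-coordinate) distance; A counts the same
    pairs extended to length m + 1. Lower values indicate more regularity."""
    n = len(x)

    def matches(length):
        # Use the same n - m start indices for both lengths so A and B
        # are drawn from comparable template sets.
        templates = [x[i:i + length] for i in range(n - m)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
        )

    return -math.log(matches(m + 1) / matches(m))
```

A perfectly periodic series yields SampEn of 0 (every length-m match extends to a length-(m+1) match), while an irregular series yields a positive value.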

Committee:

Kenneth Loparo (Committee Chair); Marc Buchner (Committee Member); Vira Chankong (Committee Member); Mark Scher (Committee Member)

Subjects:

Electrical Engineering; Engineering; Information Science; Systems Science

Keywords:

complexity analysis; time series; Approximate Entropy; Sample Entropy; nonlinear dynamic analysis

Dame, Elizabeth A. Assessing the effects of predation and habitat complexity on the recovery of the long-spined sea urchin, Diadema antillarum, in Curaçao
PhD, University of Cincinnati, 2008, Arts and Sciences : Geology
Over the past several decades, decreased herbivory due to the mass mortality of a keystone grazer, the long-spined sea urchin, Diadema antillarum, has contributed significantly to the proliferation of noncoralline macroalgae on western Atlantic coral reefs, including Curaçao in the Netherlands Antilles. Nearly 25 years after the dieoff, densities of D. antillarum remain below pre-mortality levels. The recovery of D. antillarum may be slowed due to the decreasing structural complexity of reefs, as these urchins need adequate shelter to avoid predators. I tested the hypothesis that added artificial structure reduces predation and thereby increases the persistence of translocated urchins. Translocated urchins exhibited greater persistence in plots with artificial structure. The hypothesis that translocated D. antillarum exhibit differential persistence with regard to distinctive structures was also tested. Individuals exhibited greater persistence in structures that were more enclosed and better mimicked natural reef crevices. Considering that D. antillarum densities are presently higher closer to shore than on the reefs of Curaçao, I tested the hypothesis that predation pressure on D. antillarum is greater on the deeper forereef than on the shallow reef crest by surveying the densities and biomasses of predators, and the densities of D. antillarum on six reefs. Additionally, video observations on caged urchins were used to assess relative predation pressure between the two depths. Data from surveys and video experiments indicate densities of predators are not greater on the forereef when compared to the reef crest. There was no significant correlation between D. antillarum densities and predator densities, or between D. antillarum densities and predator biomass on the reef crest or the forereef. I also surveyed these six reefs to examine the relationship between D. antillarum densities, topographic complexity, and reef condition. 
Diadema antillarum densities were not linked with topographic complexity on the reef crest; however, a positive correlation existed between urchin densities and habitat complexity on the forereef. Habitat complexity is likely limiting recovery of this urchin at greater depths. This study demonstrates the importance of conducting restoration studies that integrate experimental and ecological approaches to gain a better understanding of the factors limiting recovery of D. antillarum.

Committee:

David L. Meyer, PhD (Committee Chair); Kenneth Petren, PhD (Committee Member); Eric F. Maurer, PhD (Committee Member); George W. Uetz, PhD (Committee Member); Arnold I. Miller, PhD (Committee Member)

Subjects:

Biology; Ecology; Environmental Science; Geology

Keywords:

coral reefs; recovery; Diadema antillarum; translocation; predation; habitat complexity
