Search Results (1 - 25 of 998 Results)

Keong Kok, Teo. Comparison of Two Aerial Dispersion Models for the Prediction of Chemical Release Associated with Maritime Accidents near Coastal Areas
MS, University of Cincinnati, 2002, Medicine : Environmental Health Sciences
Two computer models for predicting the aerial dispersion of chemical releases were compared: the U.S. Environmental Protection Agency's (EPA) Areal Locations of Hazardous Atmospheres (ALOHA) model and the RITE Emergency Response Software model. A sector of Cincinnati that most closely resembles the coastal, urban terrain of Singapore was chosen to test the models. Two chemicals, chlorine and propane, were used to simulate accidental releases. Release data for a range of meteorological conditions were input to both models, which predicted the plume travel directions and the maximum plume travel distances from the source at Immediately Dangerous to Life or Health (IDLH) values. Results were compared using paired t-tests and analysis of variance (ANOVA). The analysis showed a statistically significant difference (p = 0.001) between the angles predicted by the ALOHA and RITE models. The ANOVA for the RITE-predicted angles showed that stability class significantly affects the predicted angle for both chemicals, while chemical type, temperature, and humidity were not significantly related to the RITE model's predicted plume direction. The analysis also showed a statistically significant difference (p = 0.001) between the ALOHA and RITE models in predicting maximum distances at IDLH values. The ANOVA for the RITE-predicted distances at IDLH values showed that both stability class and chemical type were statistically significant predictors, whereas temperature and humidity were not significantly related to the RITE model's predicted maximum plume distances.
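The paired comparison described above can be sketched in a few lines of SciPy; the scenario counts, distances, and effect size below are illustrative stand-ins, not the thesis data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired predictions (km to the IDLH contour) from two
# dispersion models for the same 12 release scenarios -- illustrative
# values only, not the ALOHA/RITE results reported in the thesis.
aloha = rng.normal(2.0, 0.5, 12)
rite = aloha + rng.normal(0.4, 0.2, 12)  # RITE predicts systematically farther

# Paired t-test: are the per-scenario differences centered at zero?
t_stat, p_value = stats.ttest_rel(aloha, rite)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because the same scenarios drive both models, the paired test removes scenario-to-scenario variability that an unpaired test would treat as noise.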

Committee:

Dr. Scott Clark (Advisor)

Subjects:

Environmental Sciences

Keywords:

plume modeling and prediction; Gaussian Modeling; rite emergency modeling; aerial dispersion modeling; atmospheric dispersion modeling

Zinnecker, Alicia M. Modeling for Control Design of an Axisymmetric Scramjet Engine Isolator
Doctor of Philosophy, The Ohio State University, 2012, Electrical and Computer Engineering
Renewed interest in scramjet propulsion has motivated efforts to construct models of the scramjet engine flow path that capture transient flow dynamics to an extent that allows model-based control design. In particular, a model of the propagation of the shock wave due to a change in backpressure is needed for use in designing a controller for unstart prevention. A simplified modeling process considers the engine as the cascade of an inlet/isolator subsystem and a combustor subsystem, with emphasis placed on constructing a model of the (axisymmetric) isolator. Two types of models were considered in this effort: physics-based models constructed through spatial discretization of the compressible Euler equations, and data-based models constructed through system identification using computational fluid dynamics (CFD) data. While physics-based models constructed from the two-dimensional Euler equations were found to be too computationally expensive, a reduced-order model based on the one-dimensional Euler equations was found appropriate for the task at hand. Concurrently, a set of linear isolator models was constructed by applying a subspace state-space identification technique to CFD data. An unstructured representation of the uncertainty in this model was constructed from covariance data. Combining these results with error and residual analysis, a single linear model was chosen for use in control design. Based on a mixed-sensitivity approach, an unstable and a stable controller were designed for this model with the goal of anchoring the shock wave at a specified location in the presence of an input disturbance. While both controllers were successful in maintaining the shock location within 3 cm of the expected location for the nominal model, the controller with stable dynamics resulted in enhanced robustness for the closed-loop system.
The model was then validated in closed-loop with the physics-based model, where the controller successfully meets the design goal despite limitations in the nonlinear model. This shows the promise of applying reduced-order model-based control design to the problem of active unstart prevention in a scramjet engine.

Committee:

Andrea Serrani (Advisor); Jeffrey Donbar (Committee Member); Kevin Passino (Committee Member); Umit Ozguner (Committee Member)

Subjects:

Aerospace Engineering; Electrical Engineering

Keywords:

scramjet engines; engine modeling; unstart; disturbance rejection control; physics-based modeling; data-based modeling

Lockshin, Sam. Spatial characterization of Western Interior Seaway paleoceanography using foraminifera, fuzzy sets and Dempster-Shafer theory
Master of Science (MS), Bowling Green State University, 2016, Geology
The spatial paleoceanography of the entire Western Interior Seaway (WIS) during the Cenomanian-Turonian Oceanic Anoxic Event has been reconstructed quantitatively for the first time using Geographic Information Systems. Models of foraminiferal occurrences—derived from Dempster-Shafer theory and driven by fuzzy sets of stratigraphic and spatial data—reflect water mass distributions during a brief period of rapid biotic turnover and oceanographic changes in a greenhouse world. Dempster-Shafer theory is a general framework for approximate reasoning based on combining information (evidence) to predict the probability (belief) that any phenomenon may occur. Because of the inherent imprecisions associated with paleontological data (e.g., preservational and sampling biases, missing time, reliance on expert knowledge), especially at fine-scale temporal resolutions, Dempster-Shafer theory is an appropriate technique because it factors uncertainty directly into its models. Locality data for four benthic and one planktic foraminiferal species and lithologic and geochemical data from sites distributed throughout the WIS were compiled from four ammonoid biozones of the Upper Cenomanian and Early Turonian stages. Of the 14 environmental parameters included in the dataset, percent silt, percent total carbonate, and depositional environment (essentially water depth) were associated with foraminiferal occurrences. The inductive Dempster-Shafer belief models for foraminiferal occurrences reveal the positions of northern and southern water masses consistent with the oceanographic gyre circulation pattern that dominated in the seaway during the Cenomanian-Turonian Boundary Event. The water-mixing interface in the southwestern part of the WIS was mostly restricted to the Four Corners region of the US, while the zone of overlap of northern and southern waters encompassed a much larger area along the eastern margin, where southern waters occasionally entered from the tropics.
In addition to its paleospatial significance, this study introduces a rigorous, quantitative methodology with which to analyze paleontological occurrence data, assess the degree of uncertainty and prioritize regions for additional data collection.
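Dempster's rule of combination, the core of the belief modeling described above, can be sketched in a few lines. The evidence sources (lithology, percent carbonate) and all mass values below are hypothetical illustrations, not figures from the study:

```python
# Minimal sketch of Dempster's rule of combination for a two-element
# frame {present, absent} -- does a foraminiferal species occur at a
# site? Mass may also be assigned to the whole frame (uncertainty).

def combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to disjoint hypotheses
    # Dempster's rule normalizes by the unconflicted mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

P, A = frozenset({"present"}), frozenset({"absent"})
FRAME = P | A

# Hypothetical evidence from lithology and from percent total carbonate
lith = {P: 0.6, A: 0.1, FRAME: 0.3}
carb = {P: 0.5, A: 0.2, FRAME: 0.3}

belief = combine(lith, carb)
for k, v in belief.items():
    print(sorted(k), round(v, 3))
```

Note how concordant evidence reinforces belief in "present" while the residual mass on the full frame carries the uncertainty forward rather than forcing a probability on it.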

Committee:

Dr. Margaret Yacobucci (Advisor); Dr. Peter Gorsevski (Committee Member); Dr. Andrew Gregory (Committee Member)

Subjects:

Earth; Geographic Information Science; Geography; Geology; Marine Geology; Oceanography; Paleoecology; Paleontology; Statistics

Keywords:

paleontology; GIS; foraminifera; species distribution modeling; fuzzy sets; Dempster-Shafer theory; paleoceanography; Western Interior Seaway; Cenomanian; Turonian; spatial modeling; Bayesian statistics; Oceanic Anoxic Event; uncertainty modeling; ArcGIS

Dai, Wenhua. Large signal electro-thermal LDMOSFET modeling and the thermal memory effects in RF power amplifiers
Doctor of Philosophy, The Ohio State University, 2004, Electrical Engineering
In this dissertation an analytical large-signal electro-thermal LDMOSFET model, the Agere Electro-Thermal (AET) model, is presented. Composed of three parts (a die-level device model, an equivalent circuit for the package portion, and a thermal network), the AET model was implemented in Agilent EEsof's RF circuit design software, Advanced Design System (ADS). The methodology developed could be applied to the development of other high-power device models as well. The LDMOSFET's distributed and dynamic thermal responses are computed by a modified image method. Thermal memory effects are studied with the aid of the newly developed distributed and dynamic thermal models. The package model for such discrete devices is extracted from geometry estimation and S-parameter measurements. A load-pull design technique was implemented at the simulation level, and the contours of output power and efficiency agree with those obtained in load-pull measurements. The model exhibits good accuracy in predicting P1dB, gain, PAE, IMD3 and IMD. The pros and cons of the analytical model versus a table-based model are compared using AET and a formerly developed B-spline table model, OSUFET. A distributed electro-thermal model was developed to investigate the impact of a non-uniformly distributed temperature profile on model accuracy. A 3D image method was used to compute the device's thermal resistance matrix, and the complexity of the distributed electro-thermal model was further reduced by exploiting its symmetry. The temperature distribution is reproduced in this model and is found to have no significant impact on electrical performance. The image method was further extended to compute the 3D transient temperature step responses, from which multiple thermal time constants can be extracted and applied to electro-thermal models. This improved transient thermal model is found to have a strong impact on the thermal memory effects in RF power amplifiers.
With the aid of several electro-thermal models with different transient thermal accuracies, thermal memory effects and electrical memory effects can be characterized separately. Thermal memory effects are found to be stronger in amplifiers where a predistortion technique is present, and are most significant for envelope frequencies below 1 MHz.

Committee:

Patrick Roblin (Advisor)

Keywords:

LDMOS; LDMOSFET modeling; electro-thermal modeling; large signal modeling; RF power amplifier; power amplifier design; power amplifier linearization; power amplifier predistortion; thermal memory effects; thermal image methods

Perugu, Harikishan C. Integrating Advanced Truck Models into Mobile Source PM2.5 Air Quality Modeling
PhD, University of Cincinnati, 2013, Engineering and Applied Science: Civil Engineering
The U.S. Environmental Protection Agency is concerned about fine particulate matter (PM2.5, particles with an aerodynamic diameter of 2.5 µm or less) pollution and its ill effects on public health. About 80 percent of mobile-source PM2.5 emissions are released into the urban atmosphere through the combustion of diesel fuel by trucks and are composed of road dust, smoke, and liquid droplets. To estimate the regional or local air quality impact of PM2.5 emissions and to predict future PM2.5 concentrations, atmospheric dispersion models are often used. Applying such sophisticated dispersion models at a fine level of detail can provide a comprehensive understanding of the air quality problem, including the quantitative effect of pollution sources. In current practice, however, detailed truck-specific pollution estimation is not easily possible, because no modeling methodology with supporting data exists to predict link-level hourly truck activity and the corresponding emission inventory. In the first part of this dissertation, we propose a methodology for estimating disaggregated link-level hourly truck activity based on advanced statistics, in support of an AERMOD-based dispersion modeling process. The proposed truck model consists of the following sub-models: (a) a Spatial Regression and Optimization based Truck-demand (SROT) model, developed to predict truck travel demand matrices using spatial-regression-estimated truck volumes at control locations in the study area; (b) an hourly distribution factor model to convert daily truck volumes to hourly truck volumes; and (c) a Highway Capacity Manual (HCM) based highway assignment model for assigning the hourly truck travel demand matrices. In the second part of the dissertation, we use the link-level hourly truck activity to predict typical 24-hour and maximum 1-hour PM2.5 pollution in the urban atmosphere.
In this AERMOD-based dispersion modeling process, gridded hourly emission inventories are estimated with a bottom-up approach using link-level hourly truck activity and emission factors from the MOVES model. The proposed framework is tested using data for the Cincinnati urban area on four different seasonal weekdays in the analysis year 2010. Comparison with default results reveals that the proposed models anticipate a higher PM2.5 emission contribution from heavy-duty trucks. The innovations of this research are: (a) an enhanced, comprehensive truck-related PM2.5 pollution modeling approach, with consistent estimation of heavy-duty trucks' apportionment in urban air quality; (b) more reliable estimation of spatial and temporal truck activity that accounts for peak-hour congestion through advanced modeling techniques; (c) better estimation of the gridded emission inventory, since detailed truck activity and emission rates are used in the bottom-up approach; (d) better ground-truth prediction of PM2.5 hot spots in the modeling area; and (e) a transferable methodology that can be useful in other regions of the United States.
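The bottom-up gridded inventory step (activity times emission factor, summed into grid cells) can be illustrated with a toy example; the grid cells, truck activity, and emission factors below are hypothetical, not MOVES outputs:

```python
from collections import defaultdict

# Each record: (grid_cell, vehicle-miles travelled in this hour,
# emission factor in grams of PM2.5 per mile). Values are hypothetical.
links = [
    ((3, 4), 1200.0, 0.35),
    ((3, 4),  800.0, 0.42),
    ((3, 5), 1500.0, 0.28),
]

# Sum link emissions into their containing grid cell (bottom-up step)
grid = defaultdict(float)
for cell, vmt, ef in links:
    grid[cell] += vmt * ef  # grams of PM2.5 emitted in this cell-hour

for cell, grams in sorted(grid.items()):
    print(cell, round(grams, 1))
```

A real inventory would repeat this for every link, hour, and pollutant and then hand the gridded totals to the dispersion model as area sources.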

Committee:

Heng Wei, Ph.D. (Committee Chair); Hazem Elzarka, Ph.D. (Committee Member); Mingming Lu, Ph.D. (Committee Member); Ala Tabiei, Ph.D. (Committee Member)

Subjects:

Transportation

Keywords:

Truck Activity Modeling; Mobile Source PM2.5 Pollution; MOVES; AERMOD; Spatial Regression Modeling; Fractional Response Modeling

Sigthorsson, David O. Control-Oriented Modeling and Output Feedback Control of Hypersonic Air-Breathing Vehicles
Doctor of Philosophy, The Ohio State University, 2008, Electrical and Computer Engineering
Hypersonic air-breathing vehicles are a promising and cost-efficient technology for launching low-earth-orbit satellites and providing rapid global-response capabilities. Modeling and control of such vehicles has been an active subject of research in recent years. A first-principles, physics-based model (FPM) of the vehicle's longitudinal dynamics has been developed at the Air Force Research Laboratory, and made available to the academic community for control systems design. This model, while suitable for simulation, is intractable for model-based control, thus requiring further control-oriented modeling. A typical control objective is to track a velocity and altitude reference while maintaining physical feasibility of the control input and the state. Two control strategies are presented in this work. The first is a linear time invariant (LTI) design based on a novel formulation of a robust servo-mechanism using singular perturbation arguments. This approach does not rely on state reconstruction but does require an analysis of a family of linearized models from the FPM. The second design relies on reduced-complexity modeling of the FPM. Intractable expressions of the forces and moment in the FPM are replaced with a curve-fit model (CFM). The CFM is expressed as a linear parameter varying (LPV) system, where the scheduling variables depend on the system output. A novel LPV regulator design methodology is developed, which explicitly addresses the case of over-actuated models (i.e., models with more inputs than performance outputs). This is a non-trivial extension of the analysis and design of output regulators for LTI systems. The LPV regulator separates the control problem into a steady-state controller and a stabilizing controller. The steady-state controller produces a non-unique approximate steady-state using receding horizon constrained optimization, while the stabilizer renders the steady-state attractive.
The steady-state controller represents an approach to addressing over-actuated LPV systems, alternative to static or dynamic control allocation, or standard optimal control. The stabilizer design utilizes the LPV separation principle to decompose the problem into state feedback and LPV reduced order observer design. Both approaches are applied to the FPM in simulation and their merits and drawbacks discussed.

Committee:

Andrea Serrani, PhD (Advisor); Stephen Yurkovich, PhD (Committee Member); Kevin Passino, PhD (Committee Member)

Subjects:

Electrical Engineering; Engineering

Keywords:

Electrical Engineering; Control Design; LPV Modeling; LPV Control; MIMO systems; Hypersonic Air-Breathing Vehicles; Non-Linear Modeling; Control Oriented Modeling; System ID; Over-Actuated Systems; LPV Regulator; Robust Control; Output-Feedback Control

Quintana, James R.A. Frequency Responsive Beam Tracing
BA, Oberlin College, 2016, Computer Science
This paper presents elaborations upon the beam tracing algorithm first introduced to acoustic research in 1998. Beam tracing for audio approximates reverberation filters present in physical architectural scenes by analyzing digital models of those scenes to compose the impulse responses of those filters. The algorithm as originally presented takes into account some of the acoustic absorption properties of the materials making up the scene. However, it has previously assumed that each surface which reflects or transmits sound does so at an even distribution across the space of audible frequencies. This paper describes a method for incorporating non-flat frequency responses of surfaces, which resolves this shortcoming. A survey of human subjects confirms that after these improvements, the algorithm produces more convincing and realistic filters.
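The difference between a flat reflection model and the frequency-dependent one described above can be sketched by tracking per-octave-band gains along a reflection path; the band absorption coefficients below are hypothetical material values, not those used in the paper:

```python
# Octave-band center frequencies (Hz)
BANDS_HZ = [125, 250, 500, 1000, 2000, 4000]

# Hypothetical absorption coefficients per band for two surfaces
# (e.g., a carpeted wall vs. a plastered wall) -- illustrative only.
carpet  = [0.05, 0.10, 0.25, 0.40, 0.55, 0.65]
plaster = [0.10, 0.08, 0.06, 0.05, 0.04, 0.04]

def path_gains(surfaces):
    """Per-band amplitude gain after reflecting off each surface in turn.

    Each reflection keeps a fraction (1 - absorption) of the energy,
    i.e. sqrt(1 - absorption) of the amplitude, per band.
    """
    gains = [1.0] * len(BANDS_HZ)
    for absorption in surfaces:
        gains = [g * (1.0 - a) ** 0.5 for g, a in zip(gains, absorption)]
    return gains

# A beam that bounces off carpet then plaster: high frequencies are
# attenuated far more than a single broadband gain would predict.
gains = path_gains([carpet, plaster])
for hz, g in zip(BANDS_HZ, gains):
    print(f"{hz:>5} Hz: gain {g:.3f}")
```

In a full implementation these per-band gains would shape the amplitude of each beam's contribution to the impulse response instead of a single scalar reflection coefficient.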

Committee:

Robert Geitz (Advisor)

Subjects:

Acoustics; Architecture; Computer Engineering; Computer Science; Music

Keywords:

virtual reality; acoustic modeling; reverb; reverb modeling; beam tracing; physical modeling; simulation; real time audio; audio; computer graphics

Marsolo, Keith Allen. A workflow for the modeling and analysis of biomedical data
Doctor of Philosophy, The Ohio State University, 2007, Computer and Information Science
The use of data mining techniques for the classification of shape and structure can provide critical results when applied to biomedical data. On a molecular level, an object's structure influences its function, so structure-based classification can lead to a notion of functional similarity. On a more macro scale, anatomical features can define the pathology of a disease, while changes in those features over time can illustrate its progression. Thus, structural analysis can play a vital role in clinical diagnosis. When examining the problem of structural or shape classification, one would like to develop a solution that satisfies a specific task, yet is general enough to be applied elsewhere. In this work, we propose a workflow that can be used to model and analyze biomedical data, both static and time-varying. This workflow consists of four stages: 1) Modeling, 2) Biomedical Knowledge Discovery, 3) Incorporation of Domain Knowledge and 4) Visual Interpretation and Query-based Retrieval. For each stage we propose either new algorithms or suggest ways to apply existing techniques in a previously-unused manner. We present our work as a series of case studies and extensions. We also address a number of specific research questions. These contributions are as follows: We show that generalized modeling methods can be used to effectively represent data from several biomedical domains. We detail a multi-stage classification technique that seeks to improve performance by first partitioning data based on global, high-level details, then classifying each partition using local, fine-grained features. We create an ensemble-learning strategy that boosts performance by aggregating the results of classifiers built from models of varying spatial resolutions.
This allows a user to benefit from models that provide a global, coarse-grained representation of the object as well as those that contain more fine-grained details, without suffering from the loss of information or noise effects that might arise from using only a single selection. Finally, we propose a method to model and characterize the defects and deterioration of function that can be indicative of certain diseases.

Committee:

Srinivasan Parthasarathy (Advisor)

Subjects:

Computer Science

Keywords:

Biomedical Data Modeling; Spatial Modeling; Biomedical Knowledge Discovery; Classification of Structure-based Data; Bioinformatics; Protein Modeling; Protein Classification

Rajani, Sailesh. 3-D modeling and finite element analysis of the tibia
Master of Science (MS), Ohio University, 1995, Mechanical Engineering (Engineering)

Recent studies have shown that in the event of a car crash, the lower leg of the human body is severely affected. This results from the design of cars, which tends to protect the upper part of the body through air bags while leaving the lower limb open to injury.

This research was directed toward a study of the lower limb of the human body through 3-D modeling and finite element analysis of the tibia. The 3-D model of the tibia and the fibula was developed by digitizing Magnetic Resonance Images (MRIs) of the lower leg at predetermined intervals using software known as OPTIMAS. The 3-D model of the tibia and the fibula (total length of 443.5 mm) was developed using Intergraph's Engineering Modeling Software (I/EMS). A finite element model of the tibia consisting of 2566 nodes and 3796 isoparametric wedge elements was developed with the help of Patran. The applied loading conditions simulated the forces occurring at the cartilage layer during the normal gait cycle. The non-linear finite element analysis of the tibia was carried out in ABAQUS.

The results of this study indicated a maximum compressive stress of 43.96 N/mm² occurring approximately 70 mm from the distal end of the tibia. Nodal displacements of 1.5-3 mm were observed at the articular cartilage in the direction of the applied load. Moreover, a deflection of 8 mm of the tibia toward the medial side occurred. The results obtained in this study conformed well with those reported by previous researchers.

Committee:

Bhavin Mehta (Advisor)

Subjects:

Engineering, Mechanical

Keywords:

3-D Modeling; Lower Limb; Intergraph's Engineering Modeling Software

Van Kuiken, Debra M. The Role of Hope in Self-care in Persons with Heart Failure
PhD, University of Cincinnati, 2008, Nursing : Nursing, Doctoral Program

Background: Over 5 million Americans have heart failure (HF) and must take medications, monitor for symptoms, and make major lifestyle changes. Nurses need to understand the factors that influence self-care in this population. Hope has been linked with problem-focused coping, positive health behaviors, and goal achievement, and may be linked with self-care. The Modeling and Role Modeling (MRM) Self-care Model guided this study on the role of hope as a self-care resource affecting self-care behaviors.

Methods: This was a two-phase, cross-sectional, anonymous survey. Phase I provided information on feasibility and the needed sample size. Phase II surveys were distributed to adults with heart failure from Cincinnati-area heart failure and cardiology clinics. Patients with a history of heart transplant or cognitive disorders, and non-English readers/writers, were excluded. Hope was measured with the Herth Hope Index (Herth, 1991) and the Hope Scale (Snyder, 1994). Self-care was measured with the European Heart Failure Self-care Behaviors Scale (Jaarsma et al., 2003). Possible covariates of control, coherence, and mood were measured with the Illness Perception Questionnaire-Revised and the Profile of Mood States-Short Form.

Results: Respondents (N = 67) were predominantly white (74.6%) and male (53.7%), with an average age of 59.5 years (SD = 12.4). Personal control and illness coherence were significant predictors of hope and explained 23% of the variance in hope. Age and hope were significant predictors of self-care and explained 20% of the variance in self-care behaviors. Mood disturbance was strongly and negatively related to both hope and self-care, but was not a significant predictor of self-care.

Conclusions: Promoting clients' sense of personal control and understanding of the disease through MRM may increase hope and self-care. Mood disturbance is inversely related to hope and self-care, although the relationships are unclear. Longitudinal studies are needed to test causality and to explore possible feedback mechanisms. The multidimensional aspects of hope were not supported, but qualitative methods may be useful in exploring this complex concept.

Committee:

Linda Baas, PhD, RN (Committee Chair); Devon Berry, PhD, RN (Committee Member); Wei Pan, PhD (Committee Member)

Subjects:

Nursing

Keywords:

hope; self-care; heart failure; Modeling and Role Modeling

Rajasekaran, Nepolean. A Nonlinear Constitutive Model for High Density Polyethylene at High Temperature
MS, University of Cincinnati, 2011, Engineering and Applied Science: Mechanical Engineering
High density polyethylene (HDPE) is a common polymer that is widely used in industrial applications. While a significant amount of effort has been devoted to understanding the constitutive behavior of HDPE, very little work has investigated the material response of HDPE at high strain rate and high temperature. The main objective of this research is to develop a constitutive model that bridges this gap by focusing on the non-linear stress-strain behavior in the high strain rate and high temperature range. A series of monotonic uniaxial compressive tests was conducted at high temperature (100°C) and high strain rate (1/s) to characterize the HDPE behavior. Based on the experimental results, existing hyperelastic material models such as the Mooney-Rivlin, Ogden, and Arruda-Boyce models are assessed using ABAQUS (a finite element software package). Based on extensive comparisons, a new three-dimensional constitutive model for HDPE is proposed. The constitutive equation integrates the basic mechanisms proposed by Boyce et al. [6] and Shepherd et al. [8]: the total stress is decomposed into an elastic-viscoplastic representation of the intermolecular resistance acting in parallel with a time- and temperature-dependent network resistance of the polymer chains. The material constants involved in the model were calculated by fitting the compressive test results to the proposed constitutive equations. A constitutive solver for the proposed model has been developed, and the stress-strain relation resolved from the constitutive model closely matches the corresponding experimental results.

Committee:

Dong Qian, PhD (Committee Chair); Shepherd Shepherd, PhD (Committee Member); Yijun Liu, PhD (Committee Member)

Subjects:

Mechanical Engineering

Keywords:

High Density Polyethylene; Nonlinear material modeling; Polymer constitutive model; high temperature; high strain rate; Polymer material modeling

Desmond, Eric D. Studies including hydrologic modeling and data analysis at the Ohio management systems evaluation area
Doctor of Philosophy, The Ohio State University, 2003, Food, Agricultural, and Biological Engineering
In the second chapter, the Agricultural Drainage And Pesticide Transport (ADAPT) model was compared to the water management simulation models DRAINMOD, SWATREN, and PREFLO. SWATREN and PREFLO are one-dimensional finite-difference models that function on a daily time step. ADAPT and DRAINMOD are one-dimensional mass balance models with daily and hourly time steps, respectively. ADAPT, an extension of the computer model GLEAMS, also provides chemical transport information. All four models were tested against field data from Aurora, North Carolina. Observed water table depth data were collected during 1973 through 1977 from a water table management field experiment with three subsurface drain spacing treatments of 7.5 m, 15 m, and 30 m. Both the standard error of estimate and the average absolute deviation were computed between measured and predicted midpoint water table depths. For the five-year period, ADAPT, DRAINMOD, SWATREN, and PREFLO had standard errors of estimated water table depth of 0.18, 0.19, 0.19, and 0.18 m and absolute deviations of 0.14, 0.14, 0.14, and 0.14 m, respectively. The results show good agreement between the models for this experimental site and encourage the further adoption of ADAPT to predict chemical transport. Chapter 3 presents this same model comparison. 
In the fourth chapter, the ADAPT computer simulation model was modified to partition subsurface drainage flow into two components: flow that passes strictly above the drain, and flow that passes through the soil matrix below drain depth before rising back up to enter the drain. Previous papers have shown that this two-layer model does a good job of predicting the water table depth midway between the drains. Since this model has pesticide and nutrient transport components, it is important to closely approximate water transport within the soil layers. The two-layer model gave improved predictions of drainage nitrate mass over the single-layer model. A surface and subsurface drainage experiment on silty clay soil in northwest Ohio was used to gauge this effect. A sensitivity analysis was performed on ADAPT's denitrification modeling. It was found that the two-layer method required no adjustment factor, and it outperformed single-layer versions of ADAPT in its prediction of nitrogen effluent concentration. In 1990, the multi-agency Management Systems Evaluation Areas Program was established in the Midwest Region of the United States. As part of this study, a systems approach was used to evaluate solute transport through the vadose zone into the Scioto River buried valley aquifer in southern Ohio.
The Scioto River flows about 1.2 km from the plot site, which is located on a 260 ha farm in Pike County, south-central Ohio. Fluventic Hapludoll and Fluventic Eutrochrept silt loams are the predominant soil series and overlie sands that grade into gravel at a depth of 3 to 5 m. The water table in this unconfined aquifer normally ranges from 4 to 6 m below the soil surface. Multiport wells were installed with sampling ports at average depths of 4.2, 5.7, 7.0, and 8.4 m below the ground surface. A centrally located well in each of three 10 ha plots had ports at 12.4 and 18.6 m. Three vacuum lysimeters were installed at depths of 1, 2 and 3 m in each of the plots to sample soil water in the vadose zone. Nitrate concentration measurements were collected from the wells and lysimeters approximately monthly. This chapter describes the multiport well and lysimeter monitoring systems and presents an analysis of well and lysimeter nitrate data obtained over a period of 6 years. Nitrate concentrations in samples from the multiport wells were not related to agricultural management system differences over the period of investigation. The gravel layer at 3 m serves to inhibit unsaturated flow to lower layers and the aquifer.
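The two fit statistics used in the model comparison above can be sketched directly; the measured and predicted water table depths below are hypothetical values, not the Aurora field data:

```python
import math

# Hypothetical measured vs. predicted midpoint water table depths (m)
measured  = [0.85, 1.10, 1.40, 0.95, 1.25, 1.60]
predicted = [0.90, 1.00, 1.55, 1.05, 1.20, 1.45]

n = len(measured)
residuals = [m - p for m, p in zip(measured, predicted)]

# Standard error of estimate: root-mean-square of the residuals
see = math.sqrt(sum(r * r for r in residuals) / n)

# Average absolute deviation of predicted from measured depths
aad = sum(abs(r) for r in residuals) / n

print(f"SEE = {see:.3f} m, AAD = {aad:.3f} m")
```

The SEE penalizes large misses more heavily than the AAD, which is why reporting both gives a fuller picture of model fit.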

Committee:

Andy Ward (Advisor)

Subjects:

Engineering, Agricultural

Keywords:

ADAPT; Water Management Modeling; Water Quality; Nitrates; Drainage; Agricultural Water Management; Modeling

Meador, Douglas P. Modeling Training Effects on Task Performance Using a Human Performance Taxonomy
Doctor of Philosophy (PhD), Wright State University, 2008, Engineering PhD
There is little argument that modern military systems are very complex, both in terms of the resources in time and money to develop them and the infrastructure that is required to produce trained operators. To properly execute human systems integration during the acquisition process, training systems must be developed that optimize operator training. Consequently, the training system community would benefit from simulation models that provide the ability to make accurate predictions of training processes, and allow the decision maker to specify an optimum combination of operator performance after training and the cost of that training. The goal of this research is the construction of a model of human learning using time to complete a task as a performance measure. While past research has explored the nature of functions to characterize human learning, this study examines the processes used to build a model that considers task performance as a function of the training methods used to instruct a task, the nature of the task being taught, and the ability of the human to retain skill over a specified period of nonuse. An empirical study was performed to collect data from individuals completing tasks typically performed by sensor operators assigned to military unmanned aircraft systems. The tasks performed covered a range of activities that require varying combinations of human perceptual, cognitive and motor skills. The data were fitted to a set of models that were used to predict the performance outcome of a task similar in type to those used to build the model. Results are reported and recommendations for future research are offered.
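The abstract models time to complete a task as a function of practice. A classic candidate function, the power law of practice T_n = T_1 * n^(-b), can be fitted in log-log space. This sketch is illustrative and is not the dissertation's actual model (which also accounts for training method, task type, and retention interval):

```python
import math

def fit_power_law_learning_curve(times):
    """Fit T_n = T_1 * n**(-b) by ordinary least squares in log-log space.

    `times` holds the completion times for trials 1..N.
    Returns the pair (T_1, b).
    """
    xs = [math.log(n) for n in range(1, len(times) + 1)]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Synthetic trial times that follow T_n = 10 * n**-0.3 exactly.
times = [10.0 * n ** -0.3 for n in range(1, 9)]
t1, b = fit_power_law_learning_curve(times)  # recovers T_1 = 10, b = 0.3
```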

Committee:

Raymond Hill, PhD (Committee Chair); Misty Blue, PhD (Committee Member); Gerald Chubb, PhD (Committee Member); Frank Ciarallo, PhD (Committee Member); Christopher Hale, PhD (Committee Member); Yan Liu, PhD (Committee Member)

Subjects:

Engineering

Keywords:

Human performance modeling; learning curve; performance prediction; human performance channels; modeling and simulation; skill acquisition; skill retention; skill reacquisition; training strategy; retention interval; military training; IMPRINT Pro

WOLFE, GLENN A. PERFORMANCE MACRO-MODELING TECHNIQUES FOR FAST ANALOG CIRCUIT SYNTHESIS
PhD, University of Cincinnati, 2004, Engineering : Computer Engineering
This work focuses on the development of accurate and efficient performance parameter macro-models for use in the synthesis of analog circuits. Once constructed, the mathematical models may be used as substitutes for full SPICE simulation, providing efficient computation of performance parameter estimates. In this thesis, we explore various modeling architectures, develop and apply two unique sampling methodologies for adaptively improving model quality, and attempt to apply the sizing rules methodology in order to perform dimensional reduction and ensure proper operation of analog circuits. In order to properly create an analog performance model, a training data set is needed to create the model, and an independent validation data set is needed to verify the accuracy of the model. The training and validation data sets are comprised of discretely sampled points in the design space. Various methods exist for generating these sample points. A static sampler does not take into account the shape of the function under scrutiny, whereas an adaptive sampler strives to reduce modeling error through strategic placement of costly sample points. Two unique adaptive sampling methodologies are developed and are applied to various analog circuit performance metrics. It is shown experimentally that both adaptive samplers are capable of improving maximum modeling errors for various performance metrics and analog topologies. Strategic placement of costly sample points improves model quality while reducing the time needed to create the performance models. Adaptive sampling also reduces the need for human intervention during model construction, realizing an automatic framework for sampling and modeling performance parameters. The sizing rules method and feasibility region modeling are analyzed and applied to analog performance macro-modeling in an attempt to automatically reduce the dimensionality of the design space, simplify performance parameter behavior, and ensure proper DC biasing.
A feasibility region is a portion of the design space satisfying design space and electrical space inequality constraints generated by the sizing rules method. Experimental evidence indicates that the sizing rules method alone does not sufficiently constrain a circuit to facilitate the creation of accurate analog performance macro-models. Additional, manually derived design constraints are required to enable the development of accurate performance parameter models.
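To make the adaptive-sampling idea concrete, here is a minimal one-dimensional sketch (not the thesis's samplers, which operate on SPICE-simulated performance metrics in many dimensions): each new costly evaluation is placed in the interval where the function sampled so far varies the most.

```python
def adaptive_sample(f, lo, hi, n_points):
    """Adaptively sample f on [lo, hi]: start from the endpoints, then
    repeatedly bisect the interval with the largest score |dy| * dx, so
    costly evaluations concentrate where a linear fit is likely worst."""
    xs = [lo, hi]
    ys = [f(lo), f(hi)]
    while len(xs) < n_points:
        # Score each interval; a larger score suggests poorer local modeling.
        scores = [abs(ys[i + 1] - ys[i]) * (xs[i + 1] - xs[i])
                  for i in range(len(xs) - 1)]
        i = scores.index(max(scores))
        mid = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return xs, ys

# For x**3 most of the variation is near x = 1, so samples cluster there.
xs, ys = adaptive_sample(lambda x: x ** 3, 0.0, 1.0, 9)
```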

Committee:

Dr. Ranga Vemuri (Advisor)

Keywords:

analog synthesis; analog sizing; analog performance macro-modeling; neural network; pseudo-cubic spline; feasibility regions; feasible region modeling; adaptive sampling

Singh, Manjeet. A Comparison of Rule Extraction Techniques with Emphasis on Heuristics for Imbalanced Datasets
Master of Science (MS), Ohio University, 2010, Industrial and Systems Engineering (Engineering and Technology)
Ecological datasets are often not analyzed accurately due to the presence of imbalance and outliers, with imbalance being the major cause of unacceptable modeling results in many cases. This research provides a solution with acceptable modeling accuracy values. SMOTE (in a modified form) will be used to preprocess the data. Different techniques, such as traditional techniques (Regression and Stepwise Regression), Artificial Neural Networks and TREPAN, will be used in conjunction with SMOTE to determine which combination gives the best results. 3D surfaces will be generated to decipher the impact of different inputs on the output and the interaction between inputs at a set output value.
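For reference, the core of SMOTE is interpolation between a minority-class sample and one of its nearest minority neighbors. This is a minimal sketch of standard SMOTE on made-up 2-D data, not the modified form the thesis uses:

```python
import random

def smote(minority, n_synthetic, k=3, seed=0):
    """Minimal SMOTE sketch: each synthetic point is a random interpolation
    between a minority sample and one of its k nearest minority neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        # k nearest minority neighbors by squared Euclidean distance.
        neighbors = sorted((p for p in minority if p is not base),
                           key=lambda p: sum((a - b) ** 2
                                             for a, b in zip(p, base)))[:k]
        nbr = rng.choice(neighbors)
        gap = rng.random()  # position along the segment base -> neighbor
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nbr)))
    return synthetic

# Made-up minority-class points in the unit square.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_synthetic=10)
```

Because each synthetic point is a convex combination of two existing points, the oversampled data never leaves the convex hull of the minority class.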

Committee:

Gary Weckman, PhD (Advisor); Namkyu Park, PhD (Committee Member); Tao Yuan, PhD (Committee Member); David Millie, PhD (Committee Member)

Subjects:

Industrial Engineering

Keywords:

Ecological Datasets; Imbalanced dataset modeling; Artificial Neural Networks; Surface Generation; Non-linear modeling

Polasik, Alison K. The Role of Microstructure on High Cycle Fatigue Lifetime Variability in Ti-6Al-4V
Doctor of Philosophy, The Ohio State University, 2014, Materials Science and Engineering
The microstructural sources of fatigue lifetime variability were investigated for four different microstructural variations of Ti-6Al-4V. Specimens were tested at lower stresses to investigate the behavior in the HCF (high cycle fatigue) regime, which is characterized by lifetimes near or in excess of 10^6 cycles. Fractography and replication analyses confirmed that the lifetime was dominated by crack nucleation, and thus variations in the lifetime between individual test specimens are primarily attributed to variability in the time to nucleate a dominant crack. Stereology was used to quantify key microstructural features for each tested specimen. These values were used as inputs for a series of microstructurally-based fuzzy logic neural network models. Using these models, virtual experiments were conducted to investigate the individual effect of each microstructural feature on the lifetime, an investigation that is impossible to conduct empirically because of the complex microstructure in these alloy systems. These virtual experiments demonstrated that colony size and α lath thickness have the greatest effect on HCF lifetime of β-processed Ti-6Al-4V alloys, and that colony size is more important than α lath thickness. For the α/β-processed microstructures, the volume fraction of primary α and the α lath thickness were shown to affect the lifetime, while the primary α grain size was not. Defect analyses of failed specimens indicated that damage accumulation is often localized during cyclic loading, with dislocation densities varying from one α lath to another. For all specimens, <a>-type dislocations are seen, and <c+a>-type dislocations were observed only in regions of localized plastic strain. Investigation of site-specific TEM foils extracted from the crack nucleation region of α/β-processed specimens provided information about the nature and behavior of dislocations during the crack nucleation event.
A comparison of short- and long- life specimens provides information about differences in the evolution of the dislocation structure prior to crack nucleation. The potential of this combinatorial approach for future fatigue lifetime investigations is discussed. In particular, the project demonstrates that such an approach could be useful in developing a quantitative understanding of the role variations in microstructural features have on variations in HCF lifetime.

Committee:

Hamish Fraser, PhD (Advisor); Michael Mills, PhD (Committee Member); Stephen Niezgoda, PhD (Committee Member)

Subjects:

Aerospace Materials; Engineering; Materials Science

Keywords:

Fatigue; Titanium; Fuzzy Logic Modeling; Ti-6-4; stereology; microstructure modeling

Walters, Lauren N. A Study of the Microphase Separation of Bottlebrush Copolymers
Master of Sciences, Case Western Reserve University, 2017, Macromolecular Science and Engineering
The microphase separation behavior of Janus bottlebrush copolymers, with and without solvent swelling, was evaluated theoretically with dissipative particle dynamics (DPD) modeling. The programs systematically tested conditions to determine trends in behavior. Varying the weight fractions of the A and B polymer components showed that lamellae or bicontinuous phases were dominant for most conditions. Other parameters studied within the bottlebrush copolymer system included backbone size, brush length, and thermodynamic favorability of A-B interactions in the system. Solvent swelling of the B monomer type was observed to increase the curvature of the A polymer domains. For each condition, the structure factor aided in phase identification. The underlying causes of behavior are inferred and discussed. Phase diagrams were created for the bottlebrush copolymer systems for future prediction of the separation behavior. Results of this study touch upon the fields of energy storage, high precision patterning, and electronics.

Committee:

Michael Hore (Advisor); Hatsuo Ishida (Committee Member); Lei Zhu (Committee Member)

Subjects:

Chemical Engineering; Engineering; Materials Science; Polymers

Keywords:

bottlebrush; copolymer; dissipative particle dynamics; microphase separation; polymer simulation; polymer modeling; materials modeling; solvent swelling

Kalolia, Maulik Rajendrabhai. Modeling, simulation and experimental verification of an electric bicycle with regenerative braking
Master of Science in Mechanical Engineering, Cleveland State University, 2012, Fenn College of Engineering
Electric bicycles are widely available in consumer markets. However, their use as a daily commuting vehicle is limited due to the need for frequent recharging. This thesis focuses on the mathematical modeling of an electric bicycle with regenerative braking. Basic bond graph methods are discussed here to develop state space models for mechatronic systems. A bond graph based mathematical model of an electric bicycle with regeneration is developed in this thesis. Mathematical models are tested in simulation, generating different road scenarios. Parameters required for the simulation are calculated using an experimental setup. The thesis shows the capability of bond graphs to assist in calculations for regenerative charging. The main focus of this thesis is to evaluate simulation models against a prototype. Simulation results and road testing of the prototype indicate that regenerative braking is not only feasible, but an advantage to implement in an electric bicycle. It is shown that the distance between battery recharges is improved by as much as 10% depending on riding conditions.
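The energy-recovery calculation behind regenerative braking can be sketched with a simple Euler simulation of a single stop. The parameters below are illustrative, not the thesis's bond graph model or its measured values:

```python
def simulate_regen_stop(mass, v0, brake_force, eta, dt=0.01):
    """Euler simulation of a braking event: mechanical braking power is
    harvested into the battery with efficiency eta until the bicycle stops."""
    v, recovered = v0, 0.0
    while v > 0.0:
        power = brake_force * v        # mechanical power dissipated at the wheel (W)
        recovered += eta * power * dt  # fraction stored in the battery (J)
        v = max(0.0, v - (brake_force / mass) * dt)
    return recovered

# Hypothetical numbers: 100 kg bike plus rider, 8 m/s, 200 N braking, 50% efficiency.
energy = simulate_regen_stop(mass=100.0, v0=8.0, brake_force=200.0, eta=0.5)
# The kinetic energy is 0.5 * 100 * 8**2 = 3200 J, so about half is recovered.
```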

Committee:

Hanz Richter (Committee Chair); Daniel Simon (Committee Member); Ana Stankovic (Committee Member)

Subjects:

Mechanical Engineering

Keywords:

mechatronics modeling; electric bicycle; regenerative braking; bond graph modeling; pid controller

Fang, Yu-Hua. Quantification of Pharmacokinetics in Small Animals with Molecular Imaging and Compartment Modeling Analysis
Doctor of Philosophy, Case Western Reserve University, 2009, Biomedical Engineering
In recent years molecular imaging has become an important tool in biomedical research. To quantify physiology from image data, compartment modeling has been shown to be useful for analyzing the pharmacokinetics from molecular images. However, some challenges still exist and limit the application of compartment modeling on a routine basis. Methods to resolve some of the existing challenges are proposed and validated in this thesis. First, non-invasive methods are developed to measure the input functions required in compartment modeling and parameter estimation for positron-emission tomography (PET) and dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) studies. Methods for image-derived input functions are developed and validated against the reference input functions. Second, a software environment is established to integrate functions that handle image analysis and modeling analysis based on the COmpartment Model Kinetic Analysis Tool (COMKAT). Methods to enhance the speed and interface of COMKAT have been implemented as described in this thesis. With the methods and software developed in this thesis, researchers can quantify in vivo pharmacokinetics with molecular imaging methods to measure physiology and metabolism non-invasively on a routine basis.
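As background, the simplest compartment model used with dynamic imaging data is the one-tissue model dC_t/dt = K1*C_p - k2*C_t, where C_p is the plasma input function. This is an illustrative sketch with made-up rate constants, not COMKAT code:

```python
def one_tissue_compartment(K1, k2, plasma, dt):
    """Euler integration of the one-tissue compartment model
    dC_t/dt = K1*C_p(t) - k2*C_t(t), with `plasma` sampled every dt."""
    ct, out = 0.0, []
    for cp in plasma:
        ct += (K1 * cp - k2 * ct) * dt
        out.append(ct)
    return out

# With a constant plasma input, the tissue concentration approaches (K1/k2)*C_p.
curve = one_tissue_compartment(K1=0.1, k2=0.2, plasma=[1.0] * 20000, dt=0.01)
```

Parameter estimation then amounts to adjusting K1 and k2 until such a curve matches the measured time-activity data.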

Committee:

Raymond Muzic, PhD (Advisor); Xin Yu, PhD (Committee Chair); Gerald Saidel, PhD (Committee Member); Peter Faulhaber, MD (Committee Member)

Subjects:

Biomedical Research; Engineering

Keywords:

Kinetic modeling; compartment modeling; molecular imaging; PET; MRI; quantification

Barrows, Sean Thomas. TURBO Turbulence Model Validation with Recommendations to Tip-Gap Modeling
Master of Science, The Ohio State University, 2008, Aeronautical and Astronautical Engineering

Two new turbulence models have been implemented in the turbomachinery flow simulation code TURBO. This paper focuses on the validation and implementation of the shear stress transport (SST) model and the detached eddy simulation (DES) model. The models are validated against experimental data as well as results from the current two-equation, low Reynolds number, k-ε model. Validation is conducted on a circular cylinder and the NASA transonic compressor rotor, Rotor 35. Cp and St predictions are examined for the cylinder, while operating range and performance figures are examined for Rotor 35.

Upon validation, the models are examined for robust performance with regard to the tip-gap modeling of Rotor 35. Currently, standard TURBO grids utilize a periodic loss-less tip region. Grid spacing near this region is explored by introducing clustering at the blade tip. A vena-contracta approach and gridding of the tip-gap region are also explored.

Committee:

Jen-Ping Chen, PhD (Advisor); Meyer Benzakein, PhD (Committee Member)

Subjects:

Engineering; Fluid Dynamics

Keywords:

TURBO; CFD; tip-gap modeling; computational fluid dynamics; turbulence modeling

Sunderland, Eric J. Building Information Modeling and the Parametric Boundary of Design
MARCH, University of Cincinnati, 2010, Design, Architecture, Art and Planning : Architecture (Master of)
Technology has transformed the role of the architect and the profession over an arguably short period of time. Since the invention of the printing press, the method of architectural representation has shifted from an entirely drafted form to an entirely digital form. Issues of efficiency, control, communication, and value contribute significantly to this transition. Value, as measured through the precision in the built form, has pushed architectural representation to clarify obscure and abstract intent, pushing for what some might label a transparency in design. The transition from hand-drawn to digitally drawn representation has additionally introduced new methods for approaching architectural problem solving, most notably in design forecasting, integrated practice, and multiple-solution design outcomes. Building Information Modeling and Parametric Modeling are current tools that make use of these new design methodologies. Beyond simple computer-aided-design representations (CAD), these tools combine functionality into a single three-dimensional digital model that enables the architect to run quantitative analyses during the design process. The result is a better-informed inquiry to arrive at substantially informed design outcomes. It is the goal of this thesis to evaluate the tools of representation against the issues of efficiency, control, communication, and value from architectural design to the built form. Moreover, it will be important to recognize the use implications of these tools, such as application in the academic realm, ethics of building science in architectural design, and perhaps most importantly the relationship between quantitative and qualitative input in design.

Committee:

Patricia Kucker, MARCH (Committee Chair); George Bible, MCiv.Eng (Committee Chair)

Subjects:

Architecture

Keywords:

Building Information Modeling; Parametric Modeling; Design Representation; Contemporary Design Tools

Genc, Kerim O. The Effects of Altered Gravity Environments on the Mechanobiology of Bone: From Bedrest to Spaceflight
Doctor of Philosophy, Case Western Reserve University, 2011, Biomedical Engineering

Despite the use of exercise countermeasures, losses in bone mineral density (BMD) and bone strength, which have been shown to occur at a rate of ~1 to 3% per month, are a potentially serious medical scenario that may lead to increased fracture risk. These decrements in bone are likely due to the decrease in mechanical loading experienced by the musculoskeletal system while living on-orbit.

The primary objective of this dissertation is to shed light on the effects of altered gravity environments on the mechanobiology of bone. This objective is explored through surrogate measures using the bedrest model of bone loss, direct measures onboard the International Space Station (ISS) and theoretical calculations using Finite Element (FE) and musculoskeletal modeling techniques.

Through the enhancement of previous algorithms relating daily mechanical loading to bone homeostasis, we have developed the Enhanced Daily Load Stimulus (EDLS) as a method of prescribing exercise in a "dose"-based manner during bedrest. We were able, on average, to prevent bone loss in exercise subjects. To expand the examination of the efficacy of exercise countermeasures beyond the limitations of BMD measures, we developed subject-specific, voxel- and Computed Tomography (CT)-based Finite Element (FE) models of the proximal femur. With these models, we were able to account for the 3D geometry of the bone, calculate bone strength, and show that, on average, exercise subjects had lower decrements in bone strength than control subjects.

The FE models used to examine strength changes during space flight use boundary conditions that are in the context of Earth gravity (1g), thus these models are likely not relevant when examining fracture risk in crewmembers living on other planets or in reduced gravity. Therefore, we combined FE and musculoskeletal modeling to develop a preliminary modeling framework that would allow the examination of tissue level stresses that may occur in the femur during a more operationally relevant movement that may occur in these altered gravity environments.

In chapter 5, we examined the loading that actually occurs during exercise on board the ISS. We used in‐shoe force sensors to measure the envelope of lower extremity mechanical loads that the available exercise devices could generate. We were able to provide a benchmark that will enable future researchers to judge whether or not new generations of exercise countermeasures are superior to those used at the time the data for the present experiments were collected.

Committee:

Peter Cavanagh, PhD DSc (Advisor); Patrick Crago, PhD (Advisor); Christopher Hernandez, PhD (Committee Member); Robert Kirsch, PhD (Committee Member); Antonie van den Bogert, PhD (Committee Member)

Subjects:

Biomechanics; Biomedical Engineering; Kinesiology

Keywords:

Bone Loss; Space Flight; Exercise Countermeasures; Bedrest; Finite Element Modeling; Musculoskeletal Modeling; Daily Mechanical Loading

Burchnall, Mark. An Experimental Study of the Rate Dependencies of a Nonwoven Paper Substrate in Tension using Constitutive Relations
MS, University of Cincinnati, 2012, Engineering and Applied Science: Mechanical Engineering
Paper is an everyday product used for various reasons by the consumer. This study focuses on a low basis weight nonwoven paper substrate used as toilet paper. Paper manufacturers are always trying to increase line speed to push out more paper at the same cost; the manufacturing process this research focuses on is the embossing process. This study focuses on determining the rate dependent in-plane constitutive relations that define the mechanical behavior of the paper substrate in tension. Once the constitutive models are created they can be imported into a finite-element software package and used to study changes made to the embossing process. Experimental tests were run in order to isolate specific properties of the material. Uniaxial tension tests were run at 0.1 1/s, 4.5 1/s and 45 1/s total engineering strain rates in order to determine the rate dependent effects on the material. Stress relaxation tests were run at varying moisture contents and temperatures to determine the viscoelastic model and how moisture and temperature affect the material. Viscoelastic and viscoplastic behavior models were developed to characterize the rate dependencies in the elastic and plastic regions of the material. A generalized Maxwell model is used to model the viscoelastic region, and a modified form of G'sell's flow stress law was used in conjunction with the viscous based overstress theory (VBO) to define the viscoplastic region. The research done on this paper substrate details a method to define the rate dependent constitutive properties of any fiber network material through an experimental study.
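The generalized Maxwell model mentioned above represents stress relaxation as a Prony series: a long-time modulus plus a sum of exponentially decaying branches. A minimal sketch with made-up branch parameters, not the fitted values from this study:

```python
import math

def prony_relaxation_modulus(t, e_inf, branches):
    """Relaxation modulus of a generalized Maxwell model as a Prony series:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i),
    where each branch is a (modulus E_i, relaxation time tau_i) pair."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in branches)

# Hypothetical two-branch fit (moduli in MPa, relaxation times in s).
branches = [(2.0, 0.1), (1.0, 10.0)]
e0 = prony_relaxation_modulus(0.0, 0.5, branches)      # instantaneous modulus, 3.5
e_long = prony_relaxation_modulus(1e4, 0.5, branches)  # decays toward E_inf = 0.5
```

Fitting the branch parameters to stress relaxation data at several moisture contents and temperatures is what turns such a series into a usable viscoelastic material model.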

Committee:

Yijun Liu, PhD (Committee Chair); Richard W. Hamm, MS (Committee Member); Kumar Vemaganti, PhD (Committee Member)

Subjects:

Engineering

Keywords:

constitutive modeling; nonwoven paper; experimental research; viscoelasticity; viscoplasticity; material modeling

Pilla, Srikanth. Integration of Micromechanical and Probabilistic Analysis Models of Nanocomposites
Master of Science in Mechanical Engineering, University of Toledo, 2005, Mechanical Engineering
Carbon nanofiber/nanotube reinforced composites show great potential as a replacement for conventional composite materials because of their exceptional properties. Experimental results demonstrate that substantial improvements in the mechanical behavior of a nano-structured composite can be attained using small amounts of carbon nanofibers/nanotubes as reinforcing phase. To date, many researchers have theoretically predicted the effective behavior of conventional composites and nanocomposites using traditional methods (for example using Mori-Tanaka or Halpin-Tsai models). The effect of the interface between the nanofibers/nanotubes and the matrix has also been investigated. There is uncertainty in the value of the modulus of the reinforcement in nanocomposites because it is difficult to measure the modulus. Moreover, there is variability in the matrix and interface moduli. Therefore, it is important to study the effect of uncertainty and variability in the properties of the phase materials on the properties of the nanocomposite. A large amount of work has been done on modeling uncertainty and variability in conventional materials (e.g., aluminum, steel or long fiber composites) and on predicting the probability distribution of the performance characteristics of structures made of these materials. However, an integrated tool is needed for probabilistic analysis of structures made of carbon nanofiber/nanotube composites. In this thesis, existing models for stiffness analysis of conventional composites and nanocomposites have been modified and integrated with tools for deterministic and probabilistic analysis of structures. A two-step model has been developed for determining deterministically the stiffness of nanocomposite materials considering the effect of the interface between the reinforcement and the matrix.
A methodology consisting of the above two-step model, deterministic analysis of plates and probabilistic analysis of structures has also been developed and demonstrated. It is shown that it is important to consider the interface between the reinforcement and polymer matrix and the variability and uncertainty in the properties of the phase materials of a nanocomposite.
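Of the traditional micromechanics models named above, Halpin-Tsai is the easiest to illustrate. A sketch with hypothetical phase properties (the thesis's two-step interface model goes beyond this simple estimate):

```python
def halpin_tsai_modulus(e_f, e_m, v_f, zeta):
    """Halpin-Tsai estimate of a unidirectional composite modulus:
    E_c = E_m * (1 + zeta*eta*V_f) / (1 - eta*V_f),
    eta = (E_f/E_m - 1) / (E_f/E_m + zeta),
    where zeta is the reinforcement geometry parameter
    (often taken as 2*l/d for aligned short fibers)."""
    ratio = e_f / e_m
    eta = (ratio - 1.0) / (ratio + zeta)
    return e_m * (1.0 + zeta * eta * v_f) / (1.0 - eta * v_f)

# Illustrative numbers: 1000 GPa nanofiber in a 3 GPa polymer matrix
# at 2% volume fraction with a high-aspect-ratio geometry factor.
e_c = halpin_tsai_modulus(e_f=1000.0, e_m=3.0, v_f=0.02, zeta=100.0)
```

Even at a 2% volume fraction the estimated composite modulus more than doubles the matrix modulus, which is the kind of improvement the abstract attributes to small nanofiber loadings.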

Committee:

Efstratios Nikolaidis (Advisor)

Subjects:

Engineering, Mechanical

Keywords:

nanocomposites; micromechanical modeling; probabilistic analysis; reliability analysis; interface modeling

PHIPPS, CASSANDRA J. Migrating an Operational Database Schema to Data Warehouse Schemas
MS, University of Cincinnati, 2002, Engineering : Computer Engineering
The popularity of data warehouses for analysis of data has grown tremendously, but much of the creation of data warehouses is currently done manually. Although the initial design process is labor-intensive and expensive, research towards automating data warehouse creation has been limited. We propose and illustrate algorithms for automatic schema development. Our first algorithm uses a conceptual enterprise schema of an operational database as a starting point for source-driven data warehouse schema design. Candidate conceptual data warehouse schemas are created in ME/R model form. We extend the ME/R modeling notation to note where additional user input can be used to further refine a schema. Our second algorithm follows a user-driven requirements approach that utilizes queries to guide selection of candidate schemas most likely to meet user needs. We propose a guideline of manual steps to refine the conceptual schemas to suit additional user needs, for example, the level of detail needed for date fields. The selected and possibly refined schemas are now ready to be transformed into logical schemas. The third algorithm creates logical schemas in Star model notation from the conceptual schemas in ME/R notation. The logical model provides a basis for physical modeling and the data warehouse implementation. Our algorithms provide a foundation for an automated software tool to create and evaluate data warehouse schemas. The algorithms are illustrated using the TPC-H Benchmark schema and queries.

Committee:

Dr. Karen Davis (Advisor)

Subjects:

Computer Science

Keywords:

data warehouse schema creation; conceptual modeling; logical modeling
