Search Results (1 - 25 of 40 Results)


Preston, Douglas. Last Two Surface Range Detector for Direct Detection Multisurface Flash Lidar in 90nm CMOS Technology
Master of Science in Electrical Engineering (MSEE), Wright State University, 2017, Electrical Engineering
This thesis explores a novel detection architecture for use in a Direct-Detect Flash LIDAR system. The proposed architecture implements detection of the last two surfaces within single pixels of a target scene. The novel, focal plane integrated detector design allows for detection of objects behind sparse and/or partially reflective covering such as forest canopy. The proposed detector would be duplicated and manufactured on-chip behind each avalanche photodiode within a focal plane array. Analog outputs are used to minimize interference from digital components on the analog input signal. The proposed architecture is a low-footprint solution that requires little computational post-processing. Additionally, constant fraction discrimination is used to mitigate range walk. The proposed architecture is designed in 90nm CMOS technology. The footprint is 170.1 µm² with the largest transistor dimension being 22 µm. The design is easily expandable in hardware to allow additional surfaces to be detected.
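The constant fraction discrimination mentioned in the abstract can be illustrated numerically. Below is a minimal software sketch of the CFD principle only (the thesis implements it as an analog CMOS circuit; the Gaussian pulse shape, delay, and fraction values here are illustrative assumptions): the zero crossing of a delayed copy of the pulse minus an attenuated copy does not move when the pulse amplitude changes, which is what suppresses range walk.

```python
import numpy as np

def cfd_crossing_time(pulse, delay=10, fraction=0.5):
    # Bipolar CFD signal: a delayed copy of the pulse minus an
    # attenuated copy.  Its zero crossing occurs at a fixed fraction
    # of the pulse, independent of amplitude.
    delayed = np.concatenate([np.zeros(delay), pulse[:-delay]])
    bipolar = delayed - fraction * pulse
    i = np.where((bipolar[:-1] < 0) & (bipolar[1:] >= 0))[0][0]
    # linear interpolation between samples i and i+1
    return i + bipolar[i] / (bipolar[i] - bipolar[i + 1])

t = np.arange(100.0)
shape = np.exp(-(t - 50.0) ** 2 / 50.0)      # idealized return pulse
t_weak = cfd_crossing_time(1.0 * shape)      # weak surface return
t_strong = cfd_crossing_time(10.0 * shape)   # 10x stronger return
```

A fixed-threshold discriminator would fire earlier on the stronger return (range walk); the CFD crossing times above are identical regardless of amplitude.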

Committee:

Saiyu Ren, Ph.D. (Advisor); Arnab Shaw, Ph.D. (Advisor); Ray Siferd, Ph.D. (Committee Member); Robert Muse (Other)

Subjects:

Electrical Engineering

Keywords:

Lidar; Flash Lidar; Multisurface; Time of Flight Camera; ToF Camera; Direct Detection; 90nm CMOS; CMOS; VLSI; Analog; Analogue; Constant Fraction Discriminator; CFD; Feedback Shift Register; Time to Digital Converter; TDC; Low Footprint; Footprint

Rosenblatt, Jacob A. Cinematic Style: The Effects of Technology
Bachelor of Fine Arts (BFA), Ohio University, 2010, Film
This thesis chronicles the history and evolution of American mainstream cinema technology as it pertains to camera, lenses, lighting, camera support, and film stocks.

Committee:

Steven Ross (Advisor); Jeanette Buck (Committee Member)

Subjects:

Fine Arts

Keywords:

cinematography;film technology;cinema technology;movie history;camera;lenses;film lighting;camera support;film stock;

Ballard, Brett S. Feature Based Image Mosaicing using Regions of Interest for Wide Area Surveillance Camera Arrays with Known Camera Ordering
Master of Science (M.S.), University of Dayton, 2011, Electrical Engineering
Today, modern surveillance systems utilizing camera arrays can capture several square miles of ground activity at high resolution from a single aircraft. A camera array uses multiple cameras to capture images synchronously with partial overlap between cameras' fields of view. This allows a wide area to be monitored continuously in real time by image analysts or processed for information such as object identification and location tracking. The task of combining these images from each individual camera into one large image containing all of the images' views of the scene activity is commonly called image mosaicing in the field of computer vision. Though the process of image mosaicing is not new, what makes it a topic of current research is the difficulty and variety of both problems and solutions. The objective of this thesis is to demonstrate the most suitable system for mosaicing images captured by wide area surveillance camera arrays with known camera ordering, using regions of interest combined with a feature based approach. The proposed system utilizes algorithms for feature extraction, matching, and estimation. The key difference between the proposed mosaicing system and prior successful mosaicing systems within other application domains is the use of known camera ordering. In many previously researched mosaicing systems no assumption is made about camera order; in some applications there is not even an assumption that the images view the same scene. However, for applications involving wide area surveillance camera arrays these assumptions are perfectly valid. This allows bounded regions of interest near the appropriate image borders to be used, which the proposed system demonstrates to improve both pixel accuracy and mosaic computation time over the more generalized mosaicing approach.
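The region-of-interest plus feature-based pipeline described above can be sketched in a few lines. This is an illustrative numpy-only sketch, not the thesis's implementation: the point coordinates, the 20% band width, and the pure-translation alignment are all assumptions for the demo. It shows the two key steps: restricting features to a border band (justified by known camera ordering) and estimating a homography from the matches with the standard Direct Linear Transform.

```python
import numpy as np

def border_roi(features, width, band=0.2):
    # With known camera ordering, the overlap with the neighboring
    # camera is guaranteed to lie near one border, so only features
    # in that band need to be extracted and matched.
    features = np.asarray(features, float)
    return features[features[:, 0] >= (1.0 - band) * width]

def homography_dlt(src, dst):
    # Direct Linear Transform: homogeneous least squares for the
    # 3x3 homography H mapping src points to dst points.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic example: features in the right-hand 20% of a 1000 px wide
# left image, and a purely translational alignment with the right image.
left = np.array([[850.0, 100], [900, 400], [950, 250], [870, 300], [920, 150]])
roi = border_roi(left, width=1000)
right = roi + np.array([-300.0, 0])   # matched positions in the right image
H = homography_dlt(roi, right)
p = H @ np.array([860.0, 200.0, 1.0])
p = p[:2] / p[2]                      # where that left-image point lands
```

A real system would obtain the matches from a feature detector and use robust estimation (e.g. RANSAC); the ROI simply shrinks the search space for both steps.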

Committee:

Eric Balster, PhD (Committee Chair); Vijayan Asari, PhD (Committee Member); John Loomis, PhD (Committee Member)

Subjects:

Electrical Engineering; Remote Sensing; Scientific Imaging

Keywords:

regions of interest; roi; known camera ordering; image mosaicing; feature based homography; image stitching; stereo computer vision; wide area surveillance; camera array;

McLemore, Donald Rodney. Layered Sensing Using Master-Slave Cameras
Master of Science in Engineering (MSEgr), Wright State University, 2009, Electrical Engineering
The objective of this thesis is to demonstrate the layered sensing concept using Master-Slave cameras. The process of 2D camera calibration and several key factors that can introduce error during such calibration are described. The analysis and results are based on calibration of a pinhole model camera system. The calibration is accomplished using OpenCV software and the results are analyzed using MATLAB software. These results are divided into intrinsic and extrinsic camera parameters. These parameters are then used to determine the position and orientation of the object in the camera coordinate system. This thesis also explores the use of two cameras as a Master-Slave system to demonstrate the layered sensing concept. The Master camera’s orientation, zoom, and distance from the Slave camera are fixed. Using the position and orientation of the object in the Master camera coordinate system, the position of the object in the Slave coordinate system is found using transformation matrices. This information is then used to determine the pan/tilt angles of the Slave camera. The Slave camera is then directed to focus on the object using servo control.
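The Master-to-Slave transformation and pan/tilt computation described above reduce to a few lines. This is a minimal sketch assuming a particular axis convention (camera +z forward, x right, y down) and an illustrative fixed geometry; in the thesis, the rotation R and translation t would come from the calibration-derived extrinsic parameters.

```python
import numpy as np

def slave_pan_tilt(p_master, R, t):
    # Transform the object position from the Master camera frame into
    # the Slave camera frame (R, t map Master coordinates to Slave
    # coordinates), then compute the pan/tilt angles that aim the
    # Slave's optical axis (+z) at the object.
    x, y, z = R @ np.asarray(p_master, float) + t
    pan = np.degrees(np.arctan2(x, z))                 # + = pan toward +x
    tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))  # y points down
    return pan, tilt

# Assumed geometry: Slave mounted 1 m to the Master's right with the
# same orientation; object 5 m straight ahead of the Master.
R = np.eye(3)
t = np.array([-1.0, 0.0, 0.0])   # Master origin expressed in Slave frame
pan, tilt = slave_pan_tilt([0.0, 0.0, 5.0], R, t)
```

The resulting pan is about -11.3 degrees (the Slave must turn slightly toward the Master's line of sight) and the tilt is zero, which matches the flat geometry.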

Committee:

Kuldip S. Rattan, PhD (Advisor); Devert Wicker, PhD (Committee Member); Doug Petkie, PhD (Committee Member)

Subjects:

Electrical Engineering

Keywords:

CAMERAS; checkerboard; calibration; camera calibration; Focal Length; zoom; Slave camera

Prakash, Deepak. Stereoscopic 3D viewing systems using a single sensor camera
Master of Science, The Ohio State University, 2007, Electrical Engineering
The usual practice for stereoscopic imagery has been to capture a scene from two different viewpoints with two cameras, and careful consideration must be given when setting up such a system. Several factors, such as the lens focal length, the field of view of the lens, and the timing of the video capture, have to be precisely aligned or synchronized for both cameras. In practice this is a cumbersome task, and in this thesis we investigate the use of a single video camera on a moving platform that eliminates the above-mentioned issues while still generating the same depth as perceived by the viewer. This is achieved by presenting different video frames to the two eyes with a time-slip between them. Importance has been given to the comfort with which a viewer can perceive the 3D video. Of the several factors that affect viewer comfort, such as field of view, focal length, and scene brightness, specific consideration has been given to the camera separation distance, which plays the major role. Previous work shows that the separation distance is a function of the focal lengths of the capturing and viewing cameras and of the distances of the nearest and farthest points in the scene. Applying this to our single-camera system, we obtain an optimum frame delay that ensures good comfort for the viewer while perceiving depth from the videos. Our system also enables the user to adjust the frame difference if the optimum delay is uncomfortable or if more depth is desired.
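The time-slip idea above admits a simple back-of-envelope calculation. This is a hedged kinematic sketch only (the function name and numbers are illustrative assumptions; the thesis derives the optimum delay from the focal lengths and the nearest/farthest scene distances, which this simple version ignores): a camera moving sideways advances a fixed baseline between consecutive frames, so a desired stereo separation maps to a whole number of frames of delay.

```python
def stereo_frame_delay(separation_m, speed_mps, fps):
    # A camera moving at speed_mps while recording at fps advances
    # speed_mps / fps metres between consecutive frames.  Showing the
    # two eyes frames k apart therefore emulates a stereo baseline of
    # k * (speed_mps / fps) metres.
    baseline_per_frame = speed_mps / fps
    return max(1, round(separation_m / baseline_per_frame))

# Assumed numbers: 30 fps camera on a platform moving sideways at
# 1 m/s, emulating a 65 mm interocular baseline.
delay = stereo_frame_delay(0.065, 1.0, 30)
```

Here each frame advances about 33 mm, so a two-frame delay best approximates the 65 mm target baseline; the user-adjustable frame difference in the thesis corresponds to varying this integer.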

Committee:

Yuan Zheng (Advisor)

Keywords:

Stereoscopy; Single camera; 3d viewing system

Bush, Abra K. The Liriche da Camera of Francesco Santoliquido
Doctor of Musical Arts, The Ohio State University, 2003, Music
Italian composer Francesco Santoliquido (b. 1883; d. 1971) wrote approximately twenty-six songs, or Liriche da camera, and numerous other compositions throughout the first three decades of the twentieth century. Santoliquido’s songs are worthy of study; they are full of lyricism and infused with elements of “Debussian,” “Straussian,” and Arabic influences. Santoliquido was born on August 6, 1883 in San Giorgio a Cremano, near Naples. He studied composition in Rome at the Liceo di Santa Cecilia with Giacomo Setaccioli (b. 1868; d. 1925) and Stanislao Falchi (b. 1851; d. 1922). Four years after receiving his diploma in 1908, Santoliquido moved to Tunis, just as his career was beginning to gain momentum. He primarily resided in the village of Hammamet. In 1921, he returned to Italy, settling in Anacapri, where he remained until his death on August 26, 1971. Francesco Santoliquido composed music in a number of genres. In addition to his songs, Santoliquido’s musical output includes four operas, a mimmodramma (or ballet), several symphonies and orchestral suites, chamber music, and works for solo piano. Even with his limited popularity, Santoliquido’s music was performed during his lifetime in cities from Zagreb to Manhattan. However, following the publication of his fascist writings in the late 1930s, performances of his music declined dramatically. Although the reasons for the scholarly neglect of Santoliquido’s songs are not completely known, the two main contributing factors are location and politics. Undoubtedly his choice to reside in remote locations such as Hammamet and Anacapri, coupled with his highly contested political writings, contributed to his musical obscurity. This study is limited to the solo vocal and piano Liriche da camera by Francesco Santoliquido, with the exception of those that have been lost.
Those that are analyzed and translated include: I canti della sera, I poemi del sole, Tre poesie perisane, Il poema della morte, Antica stampa italiana, Una lirica giapponese, Petites poëmes [sic] japonaise, Supremo sonno, Mélancholie, and I due poemi arabe.

Committee:

John Rice (Advisor)

Subjects:

Music

Keywords:

Francesco Santoliquido; Liriche da camera; Art song

Gotardo, Paulo Fabiano Urnau. Modeling Smooth Time-Trajectories for Camera and Deformable Shape in Structure from Motion with Occlusion
Doctor of Philosophy, The Ohio State University, 2010, Electrical and Computer Engineering

This Ph.D. dissertation focuses on the computer vision problems of rigid and non-rigid structure from motion (SFM) with occlusion. The predominant approach to solve SFM is based on the factorization of an input data matrix, W, using singular value decomposition (SVD). In practical application of SFM, however, 2D scene points cannot be tracked over all images due to occlusion. Therefore, matrix W is often missing a large portion of its entries and standard matrix factorization techniques such as SVD cannot be used directly. We assume the columns of the input observation matrix W describe the trajectories of 2D points whose positions change only gradually over time. This is the case of point tracks obtained from video images provided by a single camera that moves smoothly (i.e., gradually) around the structure of interest. We then derive a family of efficient matrix factorization algorithms that estimate the column space of W using compact parameterizations in the Discrete Cosine Transform (DCT) domain. Our methods tolerate high percentages of missing data and incorporate new models for the smooth time-trajectories of 2D-points, affine and weak-perspective cameras, and 3D deformable shape.
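The smooth-trajectory prior in the DCT domain can be illustrated on the simplest sub-problem: completing missing entries of W column by column. This sketch shows only the compact DCT parameterization and its tolerance to missing data, not the full camera/shape factorization of the dissertation; the sizes, names, and synthetic trajectories are illustrative assumptions.

```python
import numpy as np

def dct_basis(F, d):
    # First d vectors of an orthonormal DCT basis over F frames: a
    # compact parameterization of trajectories that vary smoothly
    # (i.e., gradually) over time.
    n = np.arange(F)[:, None]
    k = np.arange(d)[None, :]
    B = np.sqrt(2.0 / F) * np.cos(np.pi * (2 * n + 1) * k / (2 * F))
    B[:, 0] /= np.sqrt(2.0)
    return B

def complete_smooth_columns(W, mask, d):
    # Fit a d-term DCT series to the observed entries of each column
    # (trajectory) by least squares, then fill the missing entries
    # from the fitted series.
    B = dct_basis(W.shape[0], d)
    out = W.copy()
    for j in range(W.shape[1]):
        m = mask[:, j]
        coef, *_ = np.linalg.lstsq(B[m], W[m, j], rcond=None)
        out[~m, j] = B[~m] @ coef
    return out

rng = np.random.default_rng(0)
B = dct_basis(50, 6)
W_true = B @ rng.standard_normal((6, 4))   # 4 smooth trajectories, 50 frames
mask = rng.random(W_true.shape) > 0.4      # ~40% of entries missing
W_filled = complete_smooth_columns(W_true, mask, d=6)
```

Because each trajectory needs only d coefficients rather than F samples, the fit stays well posed even with a large fraction of entries missing, which is the property the dissertation exploits when estimating the column space of W.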

We solve the rigid SFM problem by estimating the smooth time-trajectory of a single camera. By considering a weak-perspective camera model from the outset, we directly compute Euclidean 3D shape reconstructions without requiring post-processing steps. Our results on datasets with high percentages of missing data are positively compared to those in the literature.

In non-rigid SFM, we propose a novel 3D shape trajectory approach that solves for the deformable structure as the smooth time-trajectory of a single point in a linear shape space. A key result shows that, compared to state-of-the-art algorithms, our non-rigid SFM method can better model complex articulated deformation with higher frequency deformation components. We also offer an approach for the challenging problem of non-rigid SFM with missing data.

Committee:

Aleix Martinez, PhD (Advisor); Kevin Passino, PhD (Other); Yuan Zheng, PhD (Other)

Subjects:

Artificial Intelligence; Computer Science; Electrical Engineering; Mathematics; Motion Pictures; Robots

Keywords:

structure from motion; matrix factorization; missing data; camera trajectory; shape trajectory

McIntire, John Paul. Investigating the Relationship between Binocular Disparity, Viewer Discomfort, and Depth Task Performance on Stereoscopic 3D Displays
Doctor of Philosophy (PhD), Wright State University, 2014, Human Factors and Industrial/Organizational Psychology PhD
Stereoscopic 3D (S3D) displays offer the capability to enhance user performance on a variety of depth tasks. However, the benefit derived from viewing S3D depends in part on the magnitude of binocular disparity that is displayed. Surprisingly few studies have directly investigated the relationship between disparity and depth task performance. The studies that have been conducted suggest that a minimum amount of disparity (10-50 arc min) may be needed to improve performance over conditions in which no S3D is present, but the extent to which performance might improve with increases in disparity beyond this range is unclear. From a human factors perspective, there are compelling reasons for using binocular disparities that are smaller than a strict geometrical interpretation of the scene would require (i.e., microstereopsis); one reason is to make the viewing experience more comfortable. This is important because S3D displays appear to cause a variety of simulator sickness-type problems for as many as 25-50% of users (including eye strain, headache, nausea, etc.). Preliminary evidence on the use of microstereopsis suggests that it does indeed result in a more comfortable and less fatiguing depth percept, particularly if binocular disparity is limited to a maximum of about 60 to 70 arc min (the One Degree Rule). But does microstereopsis also negate the performance benefits of stereopsis? How much can disparities be reduced before performance decrements are noticeable, and how comfortable are these disparities? Is there a stereo "sweet spot" in which both performance and comfort are high? And is this sweet spot dependent on the particular depth task being tested? Results from a simple 2 degree-of-freedom (DOF) virtual precision object alignment task showed that, when averaged across participants, maximum performance was achieved when disparity was limited to +/- 80 or 100 arc min during a 30-minute session.
Performance with S3D cues improved alignment accuracy by up to 80% compared to no stereo cues, though several participants received an inconsistent benefit, and in a few cases S3D was detrimental to performance. The tested magnitudes of disparity limits were also generally comfortable, although a significant correlation between increasing disparity and decreasing comfort was confirmed. Several optometric measures (e.g. stereoacuity, fusion ranges) predicted performance, but not comfort, on S3D displays. Results from a more complex 5 DOF virtual precision object alignment task showed that the best performance was achieved with disparity limits from +/- 60 to 100 arc min. Again, the tested magnitudes of disparity limits were generally comfortable, and several optometric measurements predicted performance but not comfort. Overall, the results suggest that the One Degree Rule for stereoscopic disparity limits can be expanded for near-viewing desktop applications. The results also suggest that while camera separations resulting in microstereopsis showed improved performance over no-stereopsis conditions, best performance is achieved with orthostereoscopic or near-orthostereoscopic levels of camera separation. The findings provide little support for Postural Instability Theory, but some support for Cue Conflict Theory, as useful guides for studying and mitigating viewer discomfort and simulator sickness symptoms on stereoscopic 3D display applications.

Committee:

Scott Watamaniuk, Ph.D. (Committee Chair); Paul Havig, Ph.D. (Committee Member); Robert Gilkey, Ph.D. (Committee Member); Kevin Bennett, Ph.D. (Committee Member)

Subjects:

Behavioral Sciences; Cognitive Psychology; Engineering; Experimental Psychology; Experiments; Medical Imaging; Ophthalmology; Optics; Psychology; Quantitative Psychology; Robotics; Robots; Scientific Imaging; Technology

Keywords:

stereoscopy; binocular vision; phoria; vergence; fusion; optometry; 3D; depth perception; spatial vision; display design; virtual camera; teleoperation; microstereopsis; orthostereopsis; virtual environment; simulator sickness; viewer discomfort

Mukherjee, Anuradha. Effect of Secondary Motor and Cognitive Tasks on Timed Up and Go Test in Older Adults
Doctor of Philosophy, University of Toledo, 2013, College of Health Sciences
Background: Changes in gait due to simultaneous performance of an attention-demanding dual task, such as walking while talking on the phone, may be caused by competing demands for attentional resources. As dual-tasking during gait is common, assessment of this paradigm is becoming highly important for mobility research. The objective of this study is to examine changes in gait and verbal response while performing a timed up and go test (TUG) with a concurrent cognitive task of counting backwards and a motor task of carrying a meal tray, in order to understand the role of cognitive factors in gait-related balance control. Significance: Exploring the influence of attention processes on gait may represent an efficient way to: 1) improve the assessment of falling risk among older adults, 2) help promote healthy aging by focusing on cognitive development, and 3) improve post-fall rehabilitation interventions by including cognitive tasks in addition to addressing musculoskeletal impairments. Methods: Community-dwelling healthy older adult volunteers participated in a timed up and go (TUG) test while their movements were recorded by a 3D motion analysis system and force plates. Statistical Analysis: Multiple one-way repeated measures ANOVAs were performed for each of the kinematic and kinetic variables to examine differences across task conditions. Post hoc tests with Bonferroni corrections were conducted for significant main outcome variables. T tests were performed to assess cognitive skills. Results: The study examined 15 healthy older adults between ages 65 and 88 (MMSE scores > 25) on their performance under dual-task conditions to gain insight into their gait and dynamic balance control.
We found that the most complex task situation, carrying the food tray while counting backward, had the most adverse effect on gait performance in healthy older adults: it resulted in 24% more time required to complete the TUG task, about 12% slowing of gait, a 40% increase in double support time, and a 10% decrease in the generation of medio-lateral ground reaction force, with conservation of cognitive task performance. However, balance control parameters such as center of mass excursion and velocity in the frontal plane remained unaffected. Conclusions: Gait is an attention-demanding task, and a concurrent cognitive or motor task disrupts walking performance in community-dwelling older adults.

Committee:

Charles Armstrong (Advisor); Phillip Gribble (Committee Member); Martin Rice (Committee Member); Peggy Arnos (Committee Member)

Subjects:

Biomechanics; Health Care; Health Sciences; Kinesiology; Physical Therapy

Keywords:

older adults; dual task; balance; secondary; motor and cognitive; motor; cognitive; concurrent; visual 3d; v3d; gait analysis; elderly; 3d motion analysis; 3 dimensional; TUG; timed up and go; fall prevention; camera system; counting backward

Kephart, Ryan J. Comparison of See-and-Avoid Performance in Manned and Remotely Piloted Aircraft
Master of Science (MS), Ohio University, 2008, Electrical Engineering (Engineering and Technology)

See-and-avoid is the current FAA approved method for pilots to avoid objects and other aircraft while flying in visual meteorological conditions (VMC). Although fully autonomous 'sense-and-avoid' or 'detect-and-avoid' systems are in development, none are currently certified. Thus, existing unmanned aerial vehicle (UAV) operations are limited to case-by-case restricted airspace or require escort by manned aircraft [1], [2]. Many UAVs are equipped with at least a forward-looking camera. In the transition between current technology and future fully autonomous, certified sense-and-avoid systems, it seems reasonable to require a ground-based operator to perform the see-and-avoid function.

This thesis discusses the flight-testing performed to establish air traffic detection ranges for low-time pilots, and for a low-cost UAV camera system. The system was evaluated to determine if it could provide see-and-avoid performance equivalent to that of the tested pilots.

Committee:

Michael Braasch, PhD (Advisor); Maarten Uijt de Haag, PhD (Committee Member); Zhen Zhu, PhD (Committee Member); William Kaufman, PhD (Committee Member)

Subjects:

Electrical Engineering

Keywords:

UAV; air traffic range; see and avoid; aircraft detection range; navigation; avionics; camera range

Snyder, Celeste J. Videotaped Interrogations: Does a Dual-Camera Perspective Produce Unbiased and Accurate Evaluations?
Master of Science (MS), Ohio University, 2007, Psychology (Arts and Sciences)
Although research demonstrates that the least biased camera-perspective for filming interrogations and confessions is to focus the camera on the detective, police prefer to focus on the suspect. A potential compromise is a dual-camera perspective wherein the full faces of both the suspect and interrogator are presented in a split-screen format simultaneously. Results from Study 1 suggest that the dual-camera perspective eliminates observers’ tendency to judge videotaped confessions as more voluntary and the suspect more guilty compared to the suspect-focus perspective. However, Study 2 results suggest that observers less accurately identify true and false confessions using the dual-camera perspective than observers exposed to an interrogator-focus videotape, an audiotape, or a written transcript. Though a dual-camera perspective eliminates the camera perspective bias, it is not an appropriate compromise between the detective- and suspect-focus formats as it does not improve observers’ ability to accurately distinguish true from false confessions.

Committee:

G. Lassiter (Advisor)

Subjects:

Psychology, Social

Keywords:

camera perspective; confession evidence; interrogations; guilt; voluntariness

Alturki, Abdulrahman S. Principal Point Determination for Camera Calibration
Doctor of Philosophy (Ph.D.), University of Dayton, 2017, Electrical and Computer Engineering
Calibration is a fundamental task in computer vision, and it is critical for a growing number of applications. Identification of the principal point parameter is a significant step for calibration, because its accuracy will strongly affect the other parameters and the overall accuracy of the calibration. Additionally, some camera properties require slightly different definitions for the principal point. This work explores several novel techniques for highly accurate principal point estimation, all of which utilize simple planar checkerboard targets. First, an improved and automated corner detection algorithm is described. Checkerboard image corner points are located as saddle points, and the Hough transform is applied to remove spurious points and group them into rows and columns. The lines formed from these rows and columns are used to identify vanishing points. Multiple vanishing points lie along horizon lines, and two different techniques based on horizon lines are described for estimating the principal point. It is also possible to identify the principal point using images of a pair of checkerboards, one behind the other, that are nominally perpendicular to the camera’s optical axis. This problem requires additional corner-point processing to separate the two checkerboards in the images, and corrections are developed to handle orientation errors such as small rotations and translations between the checkerboards and image plane. Experimental results for these methods are presented and their accuracy and possible applications are discussed.
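The vanishing-point route to the principal point can be demonstrated with the classical orthocenter property. This is a minimal sketch (not the dissertation's horizon-line methods) assuming square pixels and zero skew, under which the principal point is the orthocenter of the triangle formed by three mutually orthogonal vanishing points; the intrinsics and rotation below are synthetic test values.

```python
import numpy as np

def principal_point_from_vps(v1, v2, v3):
    # Orthocenter of the vanishing-point triangle: the principal
    # point lies on the altitude through each vertex, so intersect
    # two altitude constraints (p - v1).(v2 - v3) = 0, etc.
    v1, v2, v3 = (np.asarray(v, float) for v in (v1, v2, v3))
    A = np.vstack([v2 - v3, v3 - v1])
    b = np.array([v1 @ (v2 - v3), v2 @ (v3 - v1)])
    return np.linalg.solve(A, b)

# Synthetic camera: focal length 500 px, principal point (320, 240).
f, cx, cy = 500.0, 320.0, 240.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
a, b_ = 0.5, 0.4   # arbitrary rotation so no vanishing point is at infinity
Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
Ry = np.array([[np.cos(b_), 0, np.sin(b_)], [0, 1, 0], [-np.sin(b_), 0, np.cos(b_)]])
R = Rx @ Ry
vps = []
for i in range(3):
    v = K @ R[:, i]               # vanishing point of the i-th world axis
    vps.append(v[:2] / v[2])
pp = principal_point_from_vps(*vps)
```

In the checkerboard setting, the rows and columns grouped by the Hough transform supply the vanishing points; the recovered `pp` here matches the synthetic (320, 240) exactly.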

Committee:

John Loomis (Advisor)

Subjects:

Computer Science; Electrical Engineering

Keywords:

Camera Calibration; Computer Vision; Corner Detection; Hough Transform; Principal Point

Kee, Kok Eng. A Study of Flow Patterns and Surface Wetting in Gas-Oil-Water Flow
Doctor of Philosophy (PhD), Ohio University, 2014, Mechanical Engineering (Engineering and Technology)
Three-phase gas-oil-water flow is a common occurrence in the oil and gas industry. The presence of water in the pipeline can lead to internal corrosion if free water with dissolved corrosive species comes into contact with the wall surface, a scenario known as 'water wetting.' With the introduction of a gas phase, the flow dynamics become much more complicated due to the varying degree of spatial distribution of the immiscible fluids. The present work addresses how the addition of a gas phase to the oil-water flow can change the flow dynamics and surface wetting behavior. The work mainly focuses on the hydrodynamic aspects of the flow and how they may affect the surface wetting in pipe flow. Experimental work was first carried out on oil-water systems to investigate flow patterns and surface wetting behavior in order to establish a baseline for the subsequent measurement of three-phase flow into which CO2 gas was introduced. The experiments were conducted in a large scale 0.1 m ID flow loop. Test fluids used were light model oil LVT200 and 1 wt.% aqueous NaCl. Flow pattern images were visually captured with a high speed video camera and surface wetting behavior was measured using conductivity pins. In oil-water flow, flow patterns can be divided into two broad categories depending on whether the two immiscible liquids are dispersed or separated. Under those flow conditions, the surface wetting behavior can be categorized into four types of wetting regimes based on the intermittency of the wetting behavior as measured by the conductivity pins. In three-phase gas-oil-water flow, the effects of gas added to the oil-water system were investigated. Flow patterns and surface wetting were quantified at various liquid velocities, gas velocities and water cuts. At low water cut, the wetting results showed that adding the gas phase can help to keep water off the pipe wall, leading to oil wetting.
At high water cut, water wetting prevailed and adding gas did not lessen the intensity of wetting. Tomographic techniques were employed to study the cross-sectional distribution of the fluid phases in multiphase flow pipes. Knowing their strengths and limitations, the techniques can be used for meaningful interpretation of flow patterns. They were not suited, however, for detecting water distribution at low water cut. A mechanistic three-phase water wetting model has been proposed and implemented. The model was built from the framework of the gas-liquid flow model and the oil-water wetting model. The model has been validated with the laboratory data for three different types of flow patterns.

Committee:

Srdjan Nešić, PhD (Advisor)

Subjects:

Chemical Engineering; Fluid Dynamics; Mechanical Engineering; Petroleum Engineering

Keywords:

flow patterns; surface wetting; water wetting; gas-oil-water flow; oil-water flow; three-phase flow; flow loop; conductivity pins; tomography; high speed video camera; predictive model

Collier, Jeff. Examining laser triangulation system performance using a software simulation
Master of Science (MS), Ohio University, 1998, Electrical Engineering & Computer Science (Engineering and Technology)


Committee:

Joseph Nurre (Advisor)

Keywords:

laser triangulation system; CCD camera; Occlusion; software simulation

Ware, Lezlee J. Monitoring Visual Attention in Videotaped Interrogations: An Investigation of the Camera Perspective Bias
Master of Science (MS), Ohio University, 2006, Psychology (Arts and Sciences)

Illusory causation occurs when a more conspicuous target is incorrectly attributed greater causality. Two studies conducted within the context of videotaped police interrogations provide the first direct evidence that visually salient targets attract observers’ visual attention, recorded using eye tracking equipment, which leads to altered judgments of the targets. Study 1 demonstrated that a demand-characteristics interpretation of previous research was unsupported. Study 2 extended previous research shown to reduce the camera perspective bias by eliminating an alternative explanation for visual-channel competition, that hemispheric activation caused by a mental imagery task led participants to look away from the target in the video. Finally, regression analyses indicated that visual attention partially mediates the effect of target salience on observers’ judgments of confession voluntariness and the suspects’ likelihood of guilt, providing further evidence that the camera perspective bias takes hold early rather than later in the processing of visually salient information.

Committee:

G. Lassiter (Advisor)

Keywords:

Illusory Causation; Visual Salience; Camera Perspective Bias; Videotaped Police Interrogations; Visual Attention

Pillai, Mahesh Ramachandran. Deciphering the Link Between Polychlorinated Biphenyls, Immune Function and Exercise
Doctor of Philosophy (Ph.D.), Bowling Green State University, 2017, Biological Sciences
Polychlorinated biphenyls (PCBs) are environmental pollutants and endocrine disruptors, harmfully affecting reproductive, endocrine, neurological and immunological systems. This has implications for processes such as wound healing, which is modulated by the immunological response of the body. Conversely, while PCBs can be linked to diminished wound healing, outside of PCB pollution systems, exercise has been shown to accelerate wound healing. However, the potential for moderate intensity exercise to modulate or offset the harmful effects of a toxin like PCB is as yet unknown. Exploration of this possible moderation of the local immune response was achieved by measuring wound size and analyzing the concentrations of proinflammatory cytokines, interleukin-1β (IL-1β), interleukin-6 (IL-6), keratinocyte chemoattractant (KC), monocyte chemoattractant protein-1 (MCP-1), and tumor necrosis factor-α (TNF-α), in wounds (inflicted by punch biopsy) in mice that were not exercised as compared with those previously exercised at moderate intensity by running on a treadmill for 30 min/day and then injected intraperitoneally with Aroclor 1254 (an industrial mixture of PCB congeners) in doses of 0, 100, 500 or 1000 ppm (wt/wt of mice). Mice were euthanized at Day 3 or Day 5 (n = 3-6) and skin excised from the wound area was homogenized and analyzed for cytokine content. Systemic effects of exercise on immune function in PCB exposed animals were examined by lipopolysaccharide (LPS) challenge (intraperitoneal injections) and analyzed by measuring the average body temperatures using a thermal imaging camera. Wound healing data revealed that in animals not exercised only the greatest dose of PCB (1000 µg/g) showed a pattern of faster wound healing. Exercise produced a pattern of more rapid wound healing rates compared to the non-exercised animals administered similar doses, except for animals administered 100 µg/g PCB.
Concentrations of pro-inflammatory cytokines revealed patterns that could explain many of the changes observed in the wound healing rates and could be used as a good predictor of progression of wound healing. Average body temperature measurements revealed a febrinergic response to LPS stimulation only in the mice not exercised that were not administered PCB. In exercised mice this response was blunted, indicating lowered inflammatory response. Overall, exercise appears to have a beneficial effect on wound healing and LPS stimulation, only in animals not administered PCB. Recent studies show both PCB and exercise impair functioning of macrophages and that could have produced the similarity of results in the present study and needs to be further explored.

Committee:

Lee Meserve, Dr. (Advisor); Howard Cromwell, Dr. (Other); Todd Keylock, Dr. (Committee Member); Scott Rogers, Dr. (Committee Member); Vipaporn Phuntumart, Dr. (Committee Member)

Subjects:

Immunology; Kinesiology; Molecular Biology; Physiology; Toxicology

Keywords:

Polychlorinated Biphenyls; PCBs; Aroclor 1254; A1254; wound healing; cytokines; local immune response; systemic immune response; wound size; BioPlex assay; lipopolysaccharide; LPS; infrared camera; body temperature; macrophages; plasticity

Winner, Taryn LFURTHERING THE DEVELOPMENT OF SPECTROSCOPY FOR EDUCATION AND UNIQUE SAMPLING SITUATIONS
Doctor of Philosophy, Miami University, 2015, Chemistry
This dissertation describes five projects that advance spectroscopy in terms of education, technology, and applications. Project goals were designed so that the end results would assist in solving educational, military, and industrial challenges through the use of spectroscopy. These goals were accomplished using new instruments, existing instruments, and adaptations of existing spectroscopic instrumentation. New spectroscopic instruments were developed and built for use in educational teaching laboratories at Miami University. These instruments included a low-cost Raman spectrograph and a low-cost flame atomic emission spectrometer. Both instruments have economic and educational benefits over commercial instruments, and the flame emission instrument has already demonstrated its value in the classroom. The next project, driven by military interest, involved using near-infrared diffuse reflectance spectroscopy for the standoff detection of explosive materials on common surfaces. Projects included coating substrates with high energy materials and acquiring spectra from those coated substrates. Results showed differences in the detection limits for ammonium nitrate on various substrates, and these limits were directly dependent on the characteristics of the substrate. Another project was completed in collaboration with the United States Food and Drug Administration's Forensic Chemistry Center. This project's goal was to identify counterfeit goods based on their packaging materials. By taking a cross section of a packaging material and acquiring a high-resolution attenuated total internal reflection (ATR) Fourier transform infrared (FTIR) image, variances between the authentic and counterfeit packaging materials could be revealed. Finally, an adaptation of an ATR-FTIR microscope was used in an attempt to improve the spatial resolution of an ATR-FTIR image by working in a non-contact sampling configuration. Improvements in spatial resolution have not yet been demonstrated.

Committee:

Andre Sommer (Advisor); Neil Danielson (Committee Chair); Jonathan Scaffidi (Committee Member); Tom Cambron (Committee Member); Paul Urayama (Committee Member)

Subjects:

Chemistry

Keywords:

spectroscopy; education; educational spectroscopy; ATR; ATR imaging; near IR; Raman; digital camera; standoff; flame emission; counterfeit detection; coatings; high energy materials

Hart, CharlesA Low-cost Omni-directional Visual Bearing Only Localization System
Master of Sciences, Case Western Reserve University, 2014, EECS - Computer and Information Sciences
RAMBLER Robot is designed to enable research on biologically inspired behavioral robot control algorithms. RAMBLER Robot tests the feasibility of autonomously localizing without typical sensors like wheel odometers or GPS. The primary objective is to independently, accurately, and robustly recover the path of a moving robotic system with only the lowest-cost sensors available off-the-shelf. Methods new and old are reviewed and tested on the real RAMBLER Robot hardware. The hardware and software necessary to use omni-directional camera measurements to decrease uncertainty regarding the position and heading of a small robot system are presented in detail. The RAMBLER Robot is shown to successfully localize within a small arena using three passive indistinguishable landmarks.
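The abstract does not give the estimator's equations. As a hedged sketch of the geometric core only, the snippet below triangulates a position from absolute bearings to known, already-associated landmarks via least squares; the landmark coordinates and the `triangulate` helper are illustrative, not from the thesis, and the particle filter and landmark association steps are omitted:

```python
import numpy as np

def triangulate(landmarks, bearings):
    """Least-squares position from absolute bearings (radians) to known
    landmarks. Each bearing constrains the robot to the line through the
    landmark with direction (cos b, sin b); (L - p) x d = 0 rearranges to
    px*sin(b) - py*cos(b) = Lx*sin(b) - Ly*cos(b)."""
    L = np.asarray(landmarks, dtype=float)
    b = np.asarray(bearings, dtype=float)
    A = np.column_stack([np.sin(b), -np.cos(b)])
    rhs = np.sin(b) * L[:, 0] - np.cos(b) * L[:, 1]
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

# Three landmarks and the bearings a robot at (4, 3) would measure.
landmarks = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
true_pos = np.array([4.0, 3.0])
bearings = [np.arctan2(ly - true_pos[1], lx - true_pos[0])
            for lx, ly in landmarks]
est = triangulate(landmarks, bearings)  # recovers (4, 3) for exact bearings
```

With noisy bearings the same least-squares system simply returns the closest-fit point, which is why three landmarks (an overdetermined system) suffice for a 2-D position.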

Committee:

Roger Quinn (Committee Chair); Francis Merat (Committee Member); Gregory Lee (Committee Member)

Subjects:

Computer Science; Robotics

Keywords:

omnicam; camera; omnidirectional; panoramic; catadioptric; spherical reflector; triangulation; power center; localization; particle filter; computer vision; raspberry pi; zumo; robot; robotics; low-cost; inexpensive; python; matlab; opencv

Ware, Lezlee J.The Highs and Lows of Visual Salience and Status: Influential Factors in Source Monitoring Decisions
Doctor of Philosophy (PhD), Ohio University, 2009, Psychology (Arts and Sciences)

The current studies extended existing research on source monitoring decisions, the process of attributing the introduction of information to a given source (Johnson, 2006), by demonstrating, across three studies, that visual salience and status shift source attributions. The incremental impact of each factor was identified through the use of Batchelder and Riefer's (1990) multinomial model for source monitoring, which made it possible to demonstrate when memory processes and guessing biases were affected by each of these factors. Finally, this research looked at the relationship between visual salience and source monitoring attributions in an applied setting: videotaped police interrogations.

In the pilot study, participants were presented a videotaped interaction in one of three camera formats, manipulating the visual salience of the two actors. This led to corresponding shifts in source guessing. Thus, with simple stimuli, visual salience biases source guessing such that source attributions are higher for the most visually salient actor.

The second study systematically manipulated visual salience, status, and influence (an additional factor related to status) to demonstrate the additive effect of these variables. Visual salience, status, and influence were shown to primarily impact guessing biases (e.g., source guessing and recognition guessing). However, when these cues are combined in the right manner, memory processes can also be affected. Specifically, words presented by a visually salient, high status individual with greater influence will be recognized more often.

A videotaped interrogation was shown from three different camera perspectives within the third study. Within this context, where the detective is an individual who has much more influence and disproportionately higher status, the likelihood of recognizing words presented by the detective was increased; together, high influence and status overwhelmed the effect of visual salience on source guessing, meaning that observers are likely to guess that the detective is the source of introduced information regardless of visual salience. Finally, the relationship between source monitoring and perceptions of voluntariness and guilt was explored. Discussion considered the effect of preexisting schemas about interrogations, which likely exert independent influence on participant responses, and future research directions.

Committee:

G. Daniel Lassiter, PhD (Advisor); Francis Bellezza, PhD (Committee Member); Mark Alicke, PhD (Committee Member); Keith Markman, PhD (Committee Member); Shelly Robert, PhD (Committee Member)

Subjects:

Psychology; Social Psychology; Sociology

Keywords:

source monitoring; camera perspective bias; police interrogations; status; visual salience; confessions

Schmidt, Heather C.Effects of Interrogator Tactics and Camera Perspective Bias on Evaluations of Confession Evidence
Master of Science (MS), Ohio University, 2006, Psychology (Arts and Sciences)

The task of evaluating confession evidence, assigned to judges and frequently to jurors, is a process fraught with perceptual bias and error. Evaluators are subject to the influence of situational constraints, including the focus of the camera recording the interrogation and the types of interrogation tactics employed. It was hypothesized that presentation format (i.e., suspect-focus video, equal-focus video, detective-focus video, transcript) would either help or hinder the observer in evaluating a confession featuring specific interrogation tactics (i.e., promise of leniency, threat of punishment, minimization, maximization, no tactic).

Although interactions were not obtained, differences attributable to the tactic’s valence (positive versus negative), subtle versus blatant nature of the pressure, gender of the evaluator, and the presentation format emerged independently. These findings suggest that the actions of the interrogator and the manner in which confession evidence is presented may influence critical judgments of a suspect’s guilt and the voluntariness of the confession.

Committee:

G. Lassiter (Advisor)

Subjects:

Psychology, Social

Keywords:

interrogation tactics; camera perspective; suspect-focus; presentation format; confession evidence; guilt; voluntariness; minimization

Raichur, RohanA Novel Technique to Improve the Resolution and Contrast of Planar Nuclear Medicine Imaging
Master of Science in Engineering, University of Akron, 2008, Biomedical Engineering
Nuclear medicine images have limited spatial resolution because of limitations in the radiation detector and the associated electronics. Due to these limitations, the image of a point source of radiation is blurred; the degree of this blurring is described by the point spread function of the gamma camera, the imaging device used in nuclear medicine. The aim of this study is to evaluate a technique to increase spatial resolution and contrast by improving the point spread function of the system. The basic idea of the proposed technique is to restrict radiation so that overlapping point spreads due to adjacent point sources can be isolated. This was achieved by using a special lead mask, with a particular pattern of apertures, in conjunction with the existing technique. The new technique was implemented in two variations, using special lead masks with different sizes, shapes, and patterns of apertures, in order to investigate the degree of improvement in spatial resolution. The images obtained with the proposed technique were compared to those obtained with the conventional technique. Qualitative comparisons were made by visually inspecting the images obtained by the two techniques, while quantitative comparisons were made by statistically testing their modulation transfer functions for significant differences. Both comparisons indicated that the proposed technique was successful and gave images with increased spatial resolution and contrast.
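As a hedged illustration of the quantities the study compares, the sketch below computes a modulation transfer function as the normalized Fourier magnitude of a line spread function; the Gaussian widths and the `mtf_from_lsf` helper are invented for illustration, not taken from the thesis. A narrower spread function yields a higher MTF at nonzero spatial frequencies, which is the sense in which a sharper point spread function means better resolution and contrast:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """Modulation transfer function: normalized magnitude of the Fourier
    transform of the line spread function."""
    mag = np.abs(np.fft.rfft(lsf))
    return mag / mag[0]

x = np.arange(-64, 64, dtype=float)

def gaussian_lsf(fwhm):
    """Gaussian spread function with a given full width at half maximum."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-x**2 / (2.0 * sigma**2))

mtf_masked = mtf_from_lsf(gaussian_lsf(fwhm=4.0))        # sharper system
mtf_conventional = mtf_from_lsf(gaussian_lsf(fwhm=8.0))  # blurrier system
# mtf_masked lies above mtf_conventional at nonzero frequencies
```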

Committee:

Dale Mugler, PhD (Advisor); Anthony Passalaqua, MD (Advisor)

Subjects:

Biomedical Research

Keywords:

nuclear medicine; gamma camera; spatial resolution; contrast; point spread function; modulation transfer function; full width half maximum; full width tenth maximum

Gillette, Matthew CharlesDesign and Implementation of an Inexpensive Fast Imaging System for Cold Atom Experiments
Master of Science, Miami University, 2014, Physics
A home-built system for imaging optical lattices is presented. Our imaging system uses a repurposed astronomy camera; the complete system costs less than $5,000 while rivaling the performance of a commercially available system costing $40-50k. The camera must have an extremely low dark current, high quantum efficiency, and the ability to take precisely timed millisecond exposures. Using LabVIEW, a sequence of precise electronic pulses is created to control the laser beams in order to load the lattice structure with cold atoms. When running a LabVIEW VI at millisecond timescales, Windows introduces inaccuracies in pulse timing. A master-slave computer setup, called a real-time target (RTT), is created to keep timing accurate to the microsecond level.

Committee:

Samir Bali, PhD (Advisor); Perry Rice, PhD (Committee Member); Herbert Jaeger, PhD (Committee Member)

Subjects:

Physics

Keywords:

Fast Imaging, Cold Atoms, Ratchets, Levy Flights, camera

Mirzaei, GolrokhData Fusion of Infrared, Radar, and Acoustics Based Monitoring System
Doctor of Philosophy, University of Toledo, 2014, Engineering
Many bird and bat fatalities have been reported in the vicinity of wind farms. An acoustics, infrared camera, and marine radar based system is developed to monitor the nocturnal migration of birds and bats. The system is deployed and tested in an area of potential wind farm development that is also a stopover for migrating birds and bats. Multi-sensory data fusion is developed based on acoustics, an infrared (IR) camera, and radar. The diversity of the sensor technologies complicated development, and different signal processing techniques were developed for the various types of data. Data fusion is then implemented across the three diverse sensors in order to make inferences about the targets. This approach reduces uncertainty and provides a desired level of confidence and detailed information about the patterns. This work is a unique, multifidelity, multidisciplinary approach based on pattern recognition, machine learning, signal processing, bio-inspired computing, probabilistic methods, and fuzzy reasoning. Sensors were located in the western basin of Lake Erie in Ohio and were used to collect data over the migration periods of 2011 and 2012. Acoustic data were collected using acoustic detectors (SM2 and SM2BAT). Data were preprocessed to convert the recorded files to standard wave format. Acoustic processing was performed in two steps: feature extraction and classification. Acoustic features of bat echolocation calls were extracted using three different techniques: Short Time Fourier Transform (STFT), Mel Frequency Cepstrum Coefficient (MFCC), and Discrete Wavelet Transform (DWT). These features were fed into an Evolutionary Neural Network (ENN) for classification at the species level. Results from the different feature extraction techniques were compared on classification accuracy. The technique can identify bats and will contribute toward developing mitigation procedures for reducing bat fatalities.
Infrared videos were collected using a thermal IR camera (FLIR SR 19). Pre-processing was performed to convert the infrared videos to frames. Three different background subtraction techniques were applied to detect moving objects in the IR data. Thresholding was performed for image binarization using the extended Otsu threshold, and morphological operations were applied for noise suppression and filtering. The results of the three techniques were compared, and the selected technique (running average), followed by thresholding and filtering, was then used for tracking and information extraction. An Ant-based Clustering Algorithm (ACA) based on Lumer and Faieta, in three variations (Standard ACA, Different Speed ACA, and Short Memory ACA), was implemented over the extracted features; the variations were compared in terms of the groups created for the detected avian data. Fuzzy C-Means (FCM) was also implemented and used to group the targets. Radar data were collected using a Furuno marine radar (XANK250) with a T-bar antenna and a parabolic dish. Target detection was processed using radR, an open-source platform for recording and processing radar data. This platform was used to remove clutter and noise, detect possible targets as blips, and save the blip information. The tracking algorithm was developed, independently of radR, based on estimation and data association: estimation is performed using a Sequential Importance Sampling-based Particle Filter (SIS-PF), and data association using Nearest Neighbors (NN). The data fusion was performed in a heterogeneous, dissimilar sensory environment, which is challenging and required substantial effort in both experimental setup and algorithmic development. Setting up the experiments included purchasing the required equipment, installing the systems, configuration, and control parameter setting.
The algorithmic development involved devising algorithms and selecting the best available technique for this specific application, considering trade-offs of time, accuracy, and cost. Data fusion of the acoustics/IR/radar is a hierarchical model with two levels. Level 1 is a homogeneous dissimilar fusion based on feature-level fusion: it is employed on the IR and radar data and combines the features of detected/tracked targets into a composite feature vector, an end-to-end concatenation of the individual sensors' feature vectors, which serves as input to the next level. Level 2 is a heterogeneous, decision-level fusion: it takes the feature vector from Level 1 and fuses it with the acoustic data. The fusion was developed based on a number of fusion functions. Data alignment, including temporal and spatial alignment, and target association were implemented. A fuzzy Bayesian fusion technique was developed for the decision level: the fuzzy inference system provides the prior probability, and Bayesian inference provides the posterior probability of the avian targets. The result of the data fusion was used to process the spring and fall 2011 migration periods in the western basin of Lake Erie in Ohio. This area lies in the prevailing wind and is a putative site for wind turbine construction; it is also a stopover for migrant birds and bats, whose habitat and survival the presence of wind turbines may threaten. The aim of this project is to provide an understanding of the activity and behavior of the biological targets by combining three different sensors, and to provide detailed and reliable information. This work can be extended to other applications in the military, industry, medicine, traffic control, etc.
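The dissertation's actual membership functions and likelihoods are not given in the abstract. The sketch below only illustrates the general shape of such a fuzzy Bayesian decision-level step: a fuzzy membership over a track feature supplies the prior, and a detection likelihood ratio supplies the Bayes update. The speed breakpoints, likelihood values, and helper names are hypothetical:

```python
import numpy as np

def fuzzy_prior(speed, lo=5.0, hi=20.0):
    """Fuzzy membership for 'avian target' as a function of track speed
    (m/s). The ramp breakpoints are hypothetical, not from the study."""
    return float(np.clip((speed - lo) / (hi - lo), 0.0, 1.0))

def bayes_update(prior, p_det_given_bird, p_det_given_clutter):
    """Posterior probability that the track is a bird/bat after a
    confirming acoustic detection (likelihood values are hypothetical)."""
    num = p_det_given_bird * prior
    return num / (num + p_det_given_clutter * (1.0 - prior))

prior = fuzzy_prior(speed=12.0)        # from the fused IR/radar feature vector
post = bayes_update(prior, 0.9, 0.2)   # acoustic detection raises belief
```

The same update can be chained as further sensor evidence arrives, with each posterior serving as the next prior.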

Committee:

Mohsin Jamali, Dr. (Committee Chair); Jackson Carvalho, Dr. (Committee Member); Mohammed Niamat, Dr. (Committee Member); Richard Molyet, Dr. (Committee Member); Mehdi Pourazady, Dr. (Committee Member)

Subjects:

Biology; Computer Engineering; Computer Science; Ecology; Electrical Engineering; Energy; Engineering

Keywords:

Acoustics; Evolutionary Neural Network; Infrared Camera; Radar; Data Fusion; Clustering; Classification; Supervised Learning; Unsupervised Learning; Feature Extraction; Bat Echolocation Call; Wind Turbine; Bird Mortality; Fuzzy; Bayesian; Detection; Identification

Lung, Joni M.Mink and Raccoon Use of Wetlands as Influenced by Wetland and Landscape Characteristics in Central Ohio
Master of Science, The Ohio State University, 2008, Evolution, Ecology, and Organismal Biology
The goal of this study was to assess mink (Mustela vison) and raccoon (Procyon lotor) use of wetlands relative to wetland and adjacent landscape characteristics in central Ohio. Track plate stations were set at 25 m intervals along 125 - 225 m transects at six study areas in central Ohio. Predator lure was placed in the center of track plates, which were made of 71 x 71 cm sheets of aluminum coated with black carpenter’s chalk. Track plates were examined daily for tracks during two survey periods, a Multi-season and Spring Survey. Usage by mink and raccoon at each wetland was quantified with an index of percent visitation of plates per transect per night for each species. One camera station was established at each study area to assess the use of passive infrared digital cameras for detection of mink and raccoon. The Ohio Rapid Assessment Method (ORAM) wetland score, maximum water depth and average buffer width around the wetland were recorded for each study area. Distance to stream (perennial and intermittent), percent canopy cover, human population, and number of households were calculated with GIS software. Mink were observed at four out of six study areas with percent visitation during the Multi-season (8.0, 5.7, 3.1, 2.4%) and Spring (18.8, 12.5, 7.5, 5%) survey. Raccoon presence was detected at all six study areas with percent visitation during the Multi-season (42.4, 38.1, 36.4, 19.2, 11.4, 8.9%) and Spring (42.5, 31.3, 21.9, 12.5, 11.2, 7.5%). Mink were negatively correlated with human population (p=0.05) during the Multi-season Survey and negatively correlated with distance to nearest stream (p=0.05) during the Spring Survey. No significant correlations were found between raccoons and habitat characteristics. Mink were not recorded with cameras on any of the six study areas in either survey period, but raccoons were recorded at all six study areas. 
High raccoon densities may decrease the resolution of visitation rates as an indicator of wetland use, whereas lower mink densities may increase the resolution of visitation rates to detect changes in use. Mink appear to avoid the lowest quality wetlands and those wetlands located far from riparian areas.

Committee:

John Harder, PhD (Advisor)

Subjects:

Ecology

Keywords:

mink; raccoon; wetland; track plate; camera

Bachnak, Rafic A.Development of a stereo-based multi-camera system for 3-D vision
Doctor of Philosophy (PhD), Ohio University, 1989, Computer Science (Engineering)

Many important applications of computer vision are found in manufacturing and defense industries. Such applications include inspection, measurements, robotic assembly and autonomous vehicle guidance. The area of three-dimensional vision presents great challenges and complex problems. It has attracted considerable research efforts in recent years. Although significant progress has been made, the transfer of fundamental ideas to concrete applications has not been successful at the practical implementation stage. This is due to the complexities involved in the process of emulating the human vision capability in a computer system.

In this dissertation, a stereo-based multi-camera system for complete 3-D information extraction and object surface reconstruction in a robot workspace is developed. The system consists of N cameras arranged in N/2 periodic stereo pairs. The cameras sense the working area of a robot in the form of N images, which are processed to obtain the 3-D data in the robot's environment. The extracted information is provided to a surface reconstruction algorithm for object description. The image reconstruction phase is performed in the scene domain on the combined data of adjacent camera pairs.

The system advances the 3-D vision capability of industrial robots. Specific contributions include a camera calibration procedure that determines the system's parameters directly from the output digital image using only three known world points. This procedure uses a pinhole camera model and assumes a linear image transformation process between the image plane of the camera and the output digital image. The parameters of the system are computed by solving a set of linear equations.

The number of cameras required for entire coverage of the robot workspace is determined by defining the common area of a camera. This is essential because the 3-D information of any portion of the object that does not appear in both cameras cannot be recovered. The working environment of the robot is described based on the knowledge of the common area and the number of camera pairs used.

The shifting property of the Fourier transform is utilized for disparity estimation. The result reduces the cost and increases the accuracy of the matching procedure, because the cost of matching is directly proportional to the search limits in the other image. In addition, matching is performed only in the common area of a camera pair, which is determined from the geometry of the set-up.
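The dissertation's exact estimator is not reproduced in the abstract. A minimal 1-D phase-correlation sketch of the Fourier shift property it invokes, assuming circular shifts and using an invented `shift_from_phase_correlation` helper, might look like:

```python
import numpy as np

def shift_from_phase_correlation(a, b):
    """Estimate the circular shift between two 1-D signals using the
    Fourier shift property: a spatial shift appears as a linear phase
    ramp in the frequency domain."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12       # keep only the phase
    corr = np.fft.ifft(cross).real       # peak lands at the shift
    shift = int(np.argmax(corr))
    n = len(a)
    return shift if shift <= n // 2 else shift - n

rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = np.roll(a, 7)                        # a shifted 7 samples to the right
d = shift_from_phase_correlation(a, b)   # recovers 7
```

Such a coarse shift estimate narrows the search limits, which is the sense in which the Fourier approach makes the subsequent matching both cheaper and more accurate.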

Matching of stereo image pairs is also addressed. An image matching technique that makes use of the estimated disparity is developed. The method combines the advantages of both the area-based and feature-based approaches. The feature-based matching results guide a local window operation that identifies correct matches within a neighborhood. The local matching measure is based on the smoothness in disparity values in neighboring pixels on the surface of the object.

Surface reconstruction for complete object representation is performed in the scene domain. In this regard, the Lagrange polynomial is employed locally to approximate the object points based on the available data. Here, the known depth points retain their original values. This initialization process improves the convergence rate and the performance of the quadratic variations technique.

Finally, an algorithm for corner detection on digital curves was developed and employed for object representation to test the efficiency and reliability of the developed system for 3-D measurements. First, the thinned image is scanned to assign candidate corners. Then, false corners are eliminated by operating locally at the initial assignments. The final result is a list of corners, each identified by its position, and the number and direction of the edges intersecting at its center. The algorithm is capable of finding two and multiple-side corners and is suitable for parallel implementation.

Committee:

M. Celenk (Advisor)

Keywords:

stereo-based; multi-camera system; 3-D vision; manufacturing computers; digital; robot
