Search Results

(Total results 5)

Search Report

  • 1. Jones, Ryan OFE-EBS — An Optical Flow Equation-Inspired Event-Based Sensor for Low-Earth Orbit Ground Moving Target Indication

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Event-based sensors (EBS) report pixel-asynchronous changes in scene intensity called events. Their sparse event streams are therefore well-suited for many computer vision tasks, particularly object tracking. However, relative motion between the sensor and the scene generates extraneous events as static scene features translate across the sensor. We present OFE-EBS, an optical flow equation-inspired event-based sensor for low-Earth orbit (LEO) ground moving target indication (GMTI). Owing to the predictable velocity of a satellite platform in LEO, we augment the EBS pixel with additional cross-row subtraction hardware to remove static background features. Pixel adaptivity is also modified so that dynamic foreground features generate fewer events, further reducing the event rate. Finally, using our analytical sensor model, we show that OFE-EBS outperforms a conventional EBS in spatial resolution and event rate, accounting for the effects of pixel nonuniformity.

    Committee: Keigo Hirakawa (Committee Chair); Partha Banerjee (Committee Member); Bradley Ratliff (Committee Member) Subjects: Computer Engineering; Electrical Engineering
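The cross-row subtraction idea in the abstract above can be illustrated in software. The sketch below is a hypothetical event-domain analogue, not the actual OFE-EBS pixel circuit: given the known platform-induced row velocity, an event is treated as static background if a matching event occurred one row earlier at the time the motion predicts. The function name, event tuple layout `(row, col, time)`, and tolerance are all illustrative assumptions.

```python
def suppress_background(events, row_velocity_px_per_s, tol=1e-3):
    """Suppress events explained by predictable platform motion.

    An event at (r, c, t) is treated as background if an event occurred
    at (r - 1, c) approximately dt seconds earlier, where dt is the time
    the scene takes to translate one row across the sensor.
    """
    dt = 1.0 / row_velocity_px_per_s            # one-row translation time
    last_time = {(r, c): t for (r, c, t) in events}  # latest event per pixel
    foreground = []
    for (r, c, t) in events:
        match = last_time.get((r - 1, c))
        if match is not None and abs((t - match) - dt) < tol:
            continue  # consistent with static-background translation: drop
        foreground.append((r, c, t))
    return foreground
```

With a 100 px/s row velocity, an event pair one row and 10 ms apart cancels, while an isolated event survives as a moving-target candidate.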
  • 2. AlSattam, Osama Noise Robust Particle Event Velocimetry with A Kalman Filter-Based Tracking

    Doctor of Philosophy (Ph.D.), University of Dayton, 2024, Engineering

    Event-based pixel sensors asynchronously report changes in log-intensity with microsecond-order resolution. Their exceptional speed, cost effectiveness, and sparse event streams make them an attractive imaging modality for particle tracking velocimetry. In this work, we propose a causal Kalman filter-based particle event velocimetry (KF-PEV). Using the Kalman filter model to track the events generated by the particles seeded in the flow medium, KF-PEV yields the linear least squares estimate of the particle track velocities corresponding to the flow vector field. KF-PEV processes events in a computationally efficient and streaming manner (i.e., causal and iteratively updating). Our simulation-based benchmarking study with synthetic particle event data confirms that the proposed KF-PEV outperforms the conventional frame-based (FB) particle image/tracking velocimetry (PIV/PTV) as well as the state-of-the-art event-based (EB) particle velocimetry methods. In a real-world water tunnel event-based sensor data experiment performed on what we believe to be the widest field of view ever reported, KF-PEV accurately predicted the expected flow field of the SD7003 wing, including details such as the lower velocity in the wake and the flow separation around the underside of an angled wing.

    Committee: Keigo Hirakawa (Committee Chair) Subjects: Aerospace Engineering; Electrical Engineering
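The causal, iteratively-updating tracking described above can be sketched with a textbook constant-velocity Kalman filter over timestamped particle positions. This is a generic 1-D illustration of the idea, not the authors' KF-PEV implementation; the state model, noise parameters `q` and `r`, and the measurement layout `(t, x)` are all assumptions.

```python
import numpy as np

def kf_track_velocity(measurements, q=1e-4, r=1e-2):
    """Estimate particle velocity from timestamped positions (t, x)
    with a constant-velocity Kalman filter, updating one measurement
    at a time (causal/streaming)."""
    t_prev, x0 = measurements[0]
    x = np.array([x0, 0.0])            # state: [position, velocity]
    P = np.eye(2)                      # state covariance
    H = np.array([[1.0, 0.0]])         # we observe position only
    for (t, z) in measurements[1:]:
        dt = t - t_prev
        F = np.array([[1.0, dt], [0.0, 1.0]])          # CV transition
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])            # process noise
        x = F @ x                       # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r             # innovation covariance
        K = (P @ H.T) / S               # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        t_prev = t
    return x[1]                         # final velocity estimate
```

On a noiseless track moving at 2 px per time unit, the estimate converges to the true velocity after a handful of updates.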
  • 3. Wolf, Abigail Event Camera Applications for Driver-Assistive Technology

    Master of Science in Computer Engineering, University of Dayton, 2022, Electrical and Computer Engineering

    We propose an Event-Based Snow Removal algorithm called EBSnoR. We developed a technique to measure the dwell time of snowflakes on a pixel using event-based camera data, which is used to carry out a Neyman-Pearson hypothesis test that partitions the event stream into snowflake and background events. The effectiveness of the proposed EBSnoR was verified on a new dataset called UDayton22EBSnow, comprising data from a front-facing event-based camera in a car driving through snow, with manually annotated bounding boxes around surrounding vehicles. Qualitatively, EBSnoR correctly identifies events corresponding to snowflakes; quantitatively, EBSnoR-preprocessed event data improved the performance of event-based car detection algorithms.

    Committee: Keigo Hirakawa (Committee Chair); Vijayan Asari (Committee Member); Bradley Ratliff (Committee Member) Subjects: Computer Engineering
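The dwell-time partitioning described above can be sketched as a simple threshold test: pair each ON event with the next OFF event at the same pixel, and label short dwells as snowflakes. This is a hypothetical illustration of the dwell-time idea, not the published EBSnoR algorithm; the event tuple layout `(x, y, t, polarity)` and the fixed threshold `tau` standing in for the Neyman-Pearson decision rule are assumptions.

```python
def partition_by_dwell(events, tau):
    """Split polarity events into (snow, background) by per-pixel dwell
    time: the interval between an ON event and the next OFF event at the
    same pixel. Dwells at or below tau are labeled snowflake."""
    on_time = {}
    snow, background = [], []
    for (x, y, t, p) in sorted(events, key=lambda e: e[2]):
        if p == 1:                       # ON: object enters the pixel
            on_time[(x, y)] = t
        elif (x, y) in on_time:          # OFF: object leaves; dwell known
            dwell = t - on_time.pop((x, y))
            entry = (x, y, t, dwell)
            (snow if dwell <= tau else background).append(entry)
    return snow, background
```

A fast-moving snowflake produces a millisecond-scale dwell, while a slowly traversing background edge dwells far longer, so a single threshold separates the two populations.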
  • 4. Baldwin, Raymond High-speed Imaging with Less Data

    Doctor of Philosophy (Ph.D.), University of Dayton, 2021, Electrical Engineering

    A primary bottleneck in video processing is the readout of large sensor arrays. Typical video contains highly correlated information, which goes unexploited in traditional imaging devices. This research focuses on two revolutionary hardware designs that eliminate the need for large data handling and bypass the readout of sparse information in large arrays. First, this research proposes a novel representation for event cameras called TORE volumes and demonstrates several advantages over current methods (e.g. prioritized encoding, low computational cost, and temporal consistency). This makes the proposed method an ideal replacement for any machine learning solution that struggles to encode sparse event data into a meaningful dense tensor. TORE volumes are evaluated using several public datasets and achieve state-of-the-art performance for human pose estimation, image reconstruction, event denoising, and classification. Second, this research designs and constructs a prototype Fourier camera that compresses high-speed video in real time. Furthermore, this research evaluates several design parameters and processing algorithms necessary to capture high-speed video, including camera calibration, temporal demosaicking, and frame reconstruction. Fourier cameras perform real-time, hardware-based encoding during a single camera integration via spatial light modulation and use temporal filter arrays to sample time-related information (similar to how color filter arrays sample spectral information in standard cameras). A prototype design is constructed and evaluated against a traditional high-speed camera, achieving 4,000 fps with 16× compression. The prototype design serves as an excellent proof of concept for future designs such as on-chip temporal filter arrays.

    Committee: Vijayan Asari (Advisor); Keigo Hirakawa (Committee Member); Theus Aspiras (Committee Member); Bryan Steward (Committee Member) Subjects: Computer Engineering; Scientific Imaging
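The "dense tensor from sparse events" encoding described above can be illustrated loosely: keep the k most recent event timestamps at each pixel and stack them into a volume. This is only a toy sketch of the most-recent-events-per-pixel idea behind TORE volumes; the function, the log time-surface encoding, and all constants are illustrative assumptions, not the published representation.

```python
import numpy as np

def recent_event_volume(events, height, width, k=4, t_ref=None):
    """Build a (k, H, W) tensor from events (x, y, t): channel i holds the
    log time-since-event of the i-th most recent event at each pixel."""
    buf = [[[] for _ in range(width)] for _ in range(height)]
    for (x, y, t) in events:
        cell = buf[y][x]
        cell.append(t)
        if len(cell) > k:
            cell.pop(0)                  # keep only the k newest timestamps
    if t_ref is None:
        t_ref = max(t for (_, _, t) in events)
    vol = np.zeros((k, height, width))
    for y in range(height):
        for x in range(width):
            for i, t in enumerate(reversed(buf[y][x])):  # newest first
                vol[i, y, x] = np.log1p(t_ref - t)       # older -> larger
    return vol
```

The resulting dense tensor has a fixed shape regardless of event count, which is what makes such encodings convenient inputs for standard machine learning pipelines.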
  • 5. Almatrafi, Mohammed Optical Flow for Event Detection Camera

    Doctor of Philosophy (Ph.D.), University of Dayton, 2019, Electrical and Computer Engineering

    Optical flow (OF), the task of determining the apparent motion of objects in a scene, has been a topic of core interest in computer vision for the past three decades. Optical flow methods for conventional cameras struggle in the presence of large motion and occlusion due to slow frame rates. Optical flow of the dynamic vision sensor (DVS) has recently gained attention as a way to overcome these shortcomings. The DVS, also known as an event detection camera, emerged recently as an alternative to the conventional camera by replacing a fixed analog-to-digital (A/D) converter with a floating asynchronous circuit. Rather than reporting a pixel intensity at a fixed time interval, event detection cameras report only significant changes (i.e., above threshold) to the pixel intensity (the “events”) and the time at which each event occurs. Such a circuit significantly reduces the communication bandwidth of the camera, enabling operation at the equivalent of roughly 80,000 frames per second. In addition, the floating A/D converter can adapt to extremely high dynamic range, making the camera suitable for applications in automotive and scientific instruments. However, the sparsity of the output data renders existing image processing and computer vision methods useless. For example, the “brightness constancy constraint” at the heart of optical flow does not apply to the edge-like features that event detection cameras output, and the very notion of “frames” is absent in the asynchronous outputs. In this work, we consider a new sensor called DAViS that combines the conventional active pixel sensor (APS) and DVS circuitries, yielding conventional intensity image frames as well as events. We propose three novel optical flow methods. First, we propose a novel optical flow method designed specifically for a DAViS camera that leverages the high spatial fidelity of intensity image frames and the high temporal resolution of events generated by the DVS.
    Hence, the proposed DAViS-OF me (open full item for complete abstract)

    Committee: Keigo Hirakawa (Committee Chair) Subjects: Electrical Engineering; Engineering
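The “brightness constancy constraint” the abstract says breaks down for event data is the classic relation Ix·u + Iy·v + It = 0. As a point of reference for what frame-based methods assume, here is a minimal Lucas-Kanade-style least-squares solve of that constraint over a small window; it is standard textbook material, not the thesis's DAViS-specific method.

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Solve Ix*u + Iy*v + It = 0 in the least-squares sense over a
    window, stacking one constraint per pixel (classic Lucas-Kanade)."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # one row per pixel
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

Given gradients exactly consistent with a single flow vector, the solve recovers that vector; with the edge-like, asynchronous output of a DVS, the dense gradients Ix, Iy, It simply do not exist, which is the gap the thesis's DAViS-based methods address.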