Search Results

(Total results 3)
  • 1. Massimino, Brett. Operational Factors Affecting the Confidentiality of Proprietary Digital Assets

    Doctor of Philosophy, The Ohio State University, 2014, Business Administration

    The leakage of an organization's proprietary, digital assets to unauthorized parties can be a catastrophic event for any organization. The magnitude of these events has been recently underscored by the Target data breach, in which 70 million consumer credit card accounts were compromised and financial costs are expected to exceed $1 billion. Digital assets have steadily progressed beyond low-value data and information, and into high-value knowledge-based domains. Failures to protect these latter types of digital assets can have even greater implications for firms, or even for macroeconomic conditions. Using the Target event as an illustrative motivation, we highlight the importance of two relatively unexplored topics within the domain of digital asset protection: (1) vendor management, and (2) worker adherence to standard, well-codified procedures and technologies. We explicitly consider each of these topics through the separate empirical efforts detailed in this dissertation. Our first empirical effort examines the effects of sourcing and location decisions on the confidentiality of digital assets. We frame our study within a product-development dyad, with a proprietary digital asset being shared between partners. We treat confidentiality as a performance dimension that is influenced by each organization accessing the asset. Specifically, we empirically investigate the realm of electronic video game development and the illegal distribution activities of these products. We employ a series of web-crawling data collection programs to compile an extensive secondary dataset covering the legitimate development activities for the industry. We then harvest data from the archives of a major black-market distribution channel, and leverage these data to derive a novel, product-level measure of asset confidentiality.
    We examine the interacting factors of industrial clustering (agglomeration) and national property rights legislation in affecting this confidentiality m (open full item for complete abstract)

    Committee: John Gray (Advisor); Kenneth Boyer (Advisor); James Hill (Committee Member); Elliot Bendoly (Committee Member) Subjects: Business Administration
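The abstract in entry 1 describes deriving a product-level confidentiality measure from legitimate release data and black-market leak archives, but does not specify the measure itself. As a purely illustrative sketch (the `Title` record, the scoring rule, and the 90-day window are all hypothetical, not taken from the dissertation), such a measure might score each product by whether, and how late relative to release, it first appeared on the illicit channel:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Title:
    name: str
    release: date                 # legitimate release date
    first_leak: Optional[date]    # earliest black-market appearance, if any

def confidentiality_score(t: Title, window_days: int = 90) -> float:
    """Toy product-level confidentiality measure.

    1.0 means the title never leaked; leaked titles score in [0, 1),
    with pre-release leaks (negative lead time) scoring lowest.
    """
    if t.first_leak is None:
        return 1.0
    lead = (t.first_leak - t.release).days  # negative => leaked before release
    lead = max(-window_days, min(window_days, lead))  # clamp to the window
    return (lead + window_days) / (2 * window_days)   # rescale to [0, 1)
```

A pre-release leak then scores below a leak on release day, which in turn scores below a never-leaked title, giving a continuous outcome variable one could regress on agglomeration and property-rights factors.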
  • 2. Daughety, Nathan. Design and analysis of a trustworthy, Cross Domain Solution architecture

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Computer Science and Engineering

    With the paradigm shift to cloud-based operations, reliable and secure access to and transfer of data between differing security domains has never been more essential. A Cross Domain Solution (CDS) is a guarded interface which serves to execute the secure access and/or transfer of data between isolated and/or differing security domains defined by an administrative security policy. Cross domain security requires trustworthiness at the confluence of the hardware and software components which implement a security policy. Security components must be relied upon to defend against widely encompassing threats to information assurance -- consider insider threats and nation-state actors, which can operate both onsite and offsite. Current implementations of CDS systems use sub-optimal Trusted Computing Bases (TCB) without any formal verification proofs, underscoring the gap between blind trust and trustworthiness. Moreover, most CDSs are exclusively operated by Department of Defense agencies and are not readily available to the commercial sectors, nor are they available for independent security verification. Still other CDSs are usable only in physically isolated environments such as Sensitive Compartmented Information Facilities, and are inconsistent with the paradigm shift to cloud environments. Our purpose is to address the question of how trustworthiness can be implemented in a remotely deployable CDS that also supports availability and accessibility to all sectors. In this paper, we present a novel CDS system architecture which is the first to use a formally verified TCB. Additionally, our CDS model is the first of its kind to utilize a computation-isolation approach which allows our CDS to be remotely deployable for use in cloud-based solutions.

    Committee: John Franco Ph.D. (Committee Member); John Emmert Ph.D. (Committee Member); Marcus Dwan Pendleton Ph.D. (Committee Member); Nan Niu Ph.D. (Committee Member); Rashmi Jha Ph.D. (Committee Member) Subjects: Computer Science
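Entry 2 defines a CDS as a guarded interface that permits or denies data flows between domains according to an administrative security policy. As a minimal, hypothetical sketch of that idea (the labels, domains, and policy table below are invented for illustration and bear no relation to the dissertation's actual architecture), a default-deny guard might look like:

```python
from typing import Optional

# Illustrative administrative policy: which (data label, destination domain)
# flows are permitted. Anything not listed is denied by default.
POLICY = {
    ("unclassified", "low"): True,
    ("sensitive", "low"): False,
}

def guard_transfer(label: str, dest_domain: str, payload: bytes) -> Optional[bytes]:
    """Release the payload only if the policy explicitly allows this flow."""
    if POLICY.get((label, dest_domain), False):  # default-deny on unknown flows
        return payload
    return None
```

The default-deny lookup is the essential property: an unlisted or mislabeled flow is blocked rather than passed through, which is the behavior a formally verified TCB would aim to guarantee.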
  • 3. Sharma, Sagar. Towards Data and Model Confidentiality in Outsourced Machine Learning

    Doctor of Philosophy (PhD), Wright State University, 2019, Computer Science and Engineering PhD

    With massive data collections and needs for building powerful predictive models, data owners may choose to outsource storage and expensive machine learning computations to public cloud providers (Cloud). Data owners may choose cloud outsourcing due to a lack of in-house storage and computation resources, or a lack of expertise in building models. Similarly, users who subscribe to specialized services such as movie streaming and social networking voluntarily upload their data to the service providers' site for storage, analytics, and better services. The service provider, in turn, may also choose to benefit from ubiquitous cloud computing. However, outsourcing to a public cloud provider may raise privacy concerns when it comes to sensitive personal or corporate data. Cloud and its associates may misuse sensitive data and models internally. Moreover, if Cloud's resources are poorly secured, the confidential data and models become vulnerable to privacy attacks by external adversaries. Such potential threats are out of the control of the data owners or general users. One way to address these privacy concerns is through confidential machine learning (CML). CML frameworks enable data owners to protect their data with encryption or other data protection mechanisms before outsourcing, and facilitate the Cloud in training predictive models on the protected data. Existing cryptographic and privacy-protection methods do not immediately yield CML frameworks for outsourcing. Although theoretically sound, a naive adaptation of fully homomorphic encryption (FHE) and garbled circuits (GC), which enable privacy-preserving evaluation of arbitrary functions, is impractically expensive. Differential privacy (DP), on the other hand, cannot specifically address the confidentiality issues and threat model in the outsourced setting, as DP generally aims to protect an individual's participation in a dataset from an adversarial model consumer.
Moreover, a practical CM (open full item for complete abstract)

    Committee: Keke Chen Ph.D. (Advisor); Xiaoyu Lu Ph.D. (Committee Member); Krishnaprasad Thirunarayan Ph.D. (Committee Member); Junjie Zhang Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science
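Entry 3 argues that differential privacy protects an individual's participation in a released statistic rather than the confidentiality of the outsourced data itself. A standard illustration of this distinction is the Laplace mechanism on a count query, sketched below (this is textbook DP, not code from the dissertation): whoever runs the computation still sees the raw values; only the published output is noised.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, epsilon: float) -> float:
    """epsilon-DP count query; a count has sensitivity 1, so scale = 1/epsilon.

    Note the threat-model mismatch with outsourcing: `values` is fully
    visible to the party executing this function. DP hides an individual's
    participation in the released count, not the data from the Cloud.
    """
    return len(values) + laplace_noise(1.0 / epsilon)
```

This is why the abstract turns to encryption-based CML for the outsourced setting: there, the Cloud must compute on data it cannot read at all, which DP alone does not provide.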