Search Results

(Total results 3)

  • 1. Gogineni, Venkatsampath Raja. Goal Management in Multi-agent Systems

    Doctor of Philosophy (PhD), Wright State University, 2021, Computer Science and Engineering PhD

    Autonomous agents in a multi-agent system coordinate to achieve their goals. However, in a partially observable world, current multi-agent systems are often less effective at achieving their goals. In large part, this limitation is due to an agent's lack of reasoning about other agents and their mental states. Another factor is the agent's inability to share required knowledge with other agents, along with the lack of explanations justifying the reasons behind a goal. This research addresses these problems by presenting a general approach to agent goal management in unexpected situations. In this approach, an agent applies three main concepts: goal reasoning, to determine what goals to pursue and share; theory of mind, to select the agent(s) for goal delegation; and explanation, to justify to the selected agent(s) the reasons behind the delegated goal. Our approach presents several algorithms required for goal management in multi-agent systems. We demonstrate that these algorithms help agents in a multi-agent context better manage their goals and improve their performance. In addition, we evaluate the performance of our multi-agent system in a marine life survey domain and a rover domain. Finally, we compare our work to different multi-agent systems and present empirical results that support our claims.

    Committee: Michael T. Cox Ph.D. (Committee Co-Chair); Mateen M. Rizki Ph.D. (Committee Co-Chair); Matthew Molineaux Ph.D. (Committee Member); Michael Raymer Ph.D. (Committee Member); Tanvi Banerjee Ph.D. (Committee Member) Subjects: Computer Science
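The delegation step described in the abstract (theory of mind to pick a delegatee, then an explanation justifying the choice) can be illustrated with a minimal Python sketch. All names here (`AgentModel`, `select_delegate`, the rover agents) are hypothetical illustrations, not the dissertation's actual algorithms.

```python
from dataclasses import dataclass

@dataclass
class AgentModel:
    """Our agent's beliefs about another agent (a simple theory-of-mind model)."""
    name: str
    capabilities: set      # actions the other agent is believed able to perform
    believed_busy: bool    # our belief about its current workload

def select_delegate(goal_requirements: set, models: list):
    """Pick the first agent believed capable of the goal and available."""
    for m in models:
        if goal_requirements <= m.capabilities and not m.believed_busy:
            return m
    return None

def explain_delegation(goal: str, delegate: AgentModel, requirements: set) -> str:
    """Justify the delegation to the selected agent."""
    return (f"Goal '{goal}' delegated to {delegate.name}: "
            f"it can perform {sorted(requirements)} and appears available.")

models = [
    AgentModel("rover-1", {"navigate", "sample"}, believed_busy=True),
    AgentModel("rover-2", {"navigate", "sample", "photograph"}, believed_busy=False),
]
chosen = select_delegate({"navigate", "sample"}, models)
print(explain_delegation("survey-site-A", chosen, {"navigate", "sample"}))
```

Note that the selection reasons over *beliefs* about the other agents (capabilities, workload), which is what distinguishes a theory-of-mind delegation check from a simple capability lookup.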
  • 2. Kondrakunta, Sravya. Complex Interactions between Multiple Goal Operations in Agent Goal Management

    Doctor of Philosophy (PhD), Wright State University, 2021, Computer Science and Engineering PhD

    A significant issue in cognitive systems research is to make an agent formulate and manage its own goals. Some cognitive scientists have implemented several goal operations to address this issue, but no one has implemented more than a couple of goal operations within a single agent. One of the reasons for this limitation is the lack of knowledge about how various goal operations interact with one another. This thesis addresses this knowledge gap by implementing multiple goal operations, including goal formulation, goal change, and goal selection, and by designing an algorithm to manage any positive or negative interaction between them. These are integrated with a cognitive architecture called MIDCA and applied in five different test domains. We compare and contrast the architecture's performance under intelligent interaction management against a randomized linearization of goal operations.

    Committee: Michael T. Cox Ph.D. (Committee Co-Chair); Mateen M. Rizki Ph.D. (Committee Co-Chair); Matthew M. Molineaux Ph.D. (Committee Member); Michael L. Raymer Ph.D. (Committee Member); Michelle A. Cheatham Ph.D. (Committee Member) Subjects: Artificial Intelligence; Computer Science
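The interaction problem this abstract describes, where the order in which goal operations run changes the outcome, can be sketched in a few lines of Python. The operation names and the priority scheme below are hypothetical simplifications, not MIDCA's actual implementation.

```python
def formulate(anomalies):
    """Goal formulation: turn each detected anomaly into a candidate goal."""
    return [{"name": f"resolve-{a}", "priority": 1.0} for a in anomalies]

def change(goals, urgent):
    """Goal change: raise the priority of goals matching urgent anomalies."""
    for g in goals:
        if g["name"].removeprefix("resolve-") in urgent:
            g["priority"] += 1.0
    return goals

def select(goals, capacity):
    """Goal selection: keep at most `capacity` goals, highest priority first.

    Ordering matters: running select() before change() could drop a goal
    that change() would have promoted -- a negative interaction that an
    interaction-management algorithm must detect and avoid."""
    return sorted(goals, key=lambda g: -g["priority"])[:capacity]

goals = formulate(["fire", "leak", "noise"])
goals = change(goals, urgent={"leak"})
chosen = select(goals, capacity=1)
print([g["name"] for g in chosen])  # → ['resolve-leak']
```

A randomized linearization, the baseline mentioned in the abstract, would shuffle the order of `formulate`, `change`, and `select` on each cycle, which is exactly when such negative interactions surface.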
  • 3. Eyorokon, Vahid. Measuring Goal Similarity Using Concept, Context and Task Features

    Master of Science (MS), Wright State University, 2018, Computer Science

    Goals can be described as the user's desired state of the agent and the world, and are satisfied when the agent and the world are altered in such a way that the present state matches the desired state. Physical agents must act in the world to alter it through a series of individual atomic actions. Traditionally, agents use planning to create a chain of actions, each of which alters the current world state and yields a new one, until the final action yields the desired goal state. Once this goal state has been achieved, the goal is said to have been satisfied. Since these goals involve physical actions, we can describe them as physical goals. Our work focuses on a special type of goal that does not exist physically: the knowledge goal. Much like physical goals, knowledge goals also have a desired state, but this desired state is one of the user's understanding. Once the user has learned the missing information, the knowledge goal has been satisfied. While physical goals are given to agents who must then produce a plan of actions to alter the world, knowledge goals are given to an agent who must then produce a sequence of intermediate knowledge goals to alter the user's state of knowledge. Much like how individual actions comprise a plan to alter the physical world, individual questions comprise a goal trajectory and alter the state of a user's knowledge. This overall path of inquiry resembles an investigation for knowledge, not unlike that of a detective. Given that not all users learn the same way, creating a plan to solve a knowledge goal is not a trivial task. Furthermore, in complex domains, it is not immediately clear to users themselves what their knowledge goal is as they continue to learn how to phrase the correct questions. As users continue to refine their questions, their search grows in length and often in complexity as questions become increasingly specific. 
To address these issues, we created and evalu (open full item for complete abstract)

    Committee: Michelle Cheatham Ph.D. (Advisor); Michael Cox Ph.D. (Committee Member); Michael Raymer Ph.D. (Committee Member) Subjects: Computer Science
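A goal-similarity measure over concept, context, and task features, as in the title of the third entry, can be sketched as a weighted combination of per-feature set overlaps. This is a minimal illustration using Jaccard overlap and made-up weights and feature values, not the thesis's actual metric.

```python
def jaccard(a: set, b: set) -> float:
    """Set overlap: 1.0 for identical non-empty sets, 0.0 for disjoint ones."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def goal_similarity(g1: dict, g2: dict, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of concept, context, and task feature overlap."""
    parts = [jaccard(g1[k], g2[k]) for k in ("concept", "context", "task")]
    return sum(w * p for w, p in zip(weights, parts))

# Two hypothetical knowledge goals sharing context and task but not all concepts.
g_a = {"concept": {"engine", "failure"}, "context": {"marine"}, "task": {"diagnose"}}
g_b = {"concept": {"engine", "noise"},   "context": {"marine"}, "task": {"diagnose"}}
print(round(goal_similarity(g_a, g_b), 3))  # → 0.733
```

Splitting similarity across the three feature types lets a retrieval system weight, say, task overlap more heavily than surface concept overlap when matching a new goal trajectory against past ones.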