Rey Pocius

I am a National Science Foundation Graduate Research Fellow and PhD student at the University of Southern California, where I work with Cyrus Shahabi in the Integrated Media Systems Center and the Information Laboratory (InfoLAB).

I completed my Bachelor's degree in Computer Science at Oregon State University. While at Oregon State, I worked under Dr. Bill Smart in the Personal Robotics Lab, where I conducted research on dimensionality reduction for reinforcement learning.

I worked as an intern and student contractor at the United States Naval Research Laboratory in the Navy Center for Applied Research in Artificial Intelligence on multi-task reinforcement learning. I then conducted research on explainable artificial intelligence (XAI) as part of the DARPA XAI program, working with both the team at NRL and the team at Oregon State University under Dr. Alan Fern.

In addition to my research, I am involved with the BOTS (Building Opportunities with Teachers in Schools) program at USC, helping to create scalable and affordable in-school robotics and coding activities that foster students' computational thinking skills.

Email  /  Google Scholar  /  LinkedIn

Research

My current research interests are in machine learning, deep learning, spatial computing, and privacy.

Education
PhD in Computer Science, USC, August 2019 - Present

MS in Computer Science, USC, August 2019 - May 2021

BS in Computer Science, Oregon State University, September 2015 - June 2019

Videos

Oregon State University Experience

Publications
Communicating Robot Goals via Haptic Feedback in Manipulation Tasks
Rey Pocius, Naghmeh Zamani, Heather Culbertson, Stefanos Nikolaidis
HRI Pioneers Workshop, Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20), 591-593

In shared autonomy, human teleoperation blends with intelligent robot autonomy to create robot control. This combination enables assistive robot manipulators to help human operators by predicting and reaching the human's desired target. However, it also reduces the user's control authority and the transparency of the interaction, which negatively affects their willingness to use the system. We propose haptic feedback as a seamless and natural way for the robot to communicate information to the user and assist them in completing the task. A proof-of-concept demonstration of our system illustrates the effectiveness of haptic feedback in communicating the robot's goals to the user. We hypothesize that this can be an effective way to improve performance in teleoperated manipulation tasks, while retaining the control authority of the user.

Neural networks for incremental dimensionality reduced reinforcement learning
William Curran, Rey Pocius, Bill Smart
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), 1559-1565

State-of-the-art personal robots must perform complex manipulation tasks to be viable in assistive scenarios. However, many of these robots, like the PR2, use manipulators with high degrees-of-freedom. The complexity of these robots leads to large-dimensional state spaces, which are difficult to fully explore. Our previous work introduced the IDRRL algorithm, which compresses the learning space by transforming a high-dimensional learning space onto a lower-dimensional manifold while preserving expressivity. In this work we formally prove that IDRRL maintains PAC-MDP guarantees. We then improve upon our previous formulation of IDRRL by introducing cascading autoencoders (CAE) for dimensionality reduction, producing the new algorithm IDRRL-CAE. We demonstrate the improvement of this extension over our previous formulation, IDRRL-PCA, in the Mountain Car and Swimmers domains.
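To give a feel for the core idea of compressing the state space before learning, here is a minimal Python sketch of a PCA-style projection in the spirit of IDRRL-PCA. This is not the paper's implementation; the state dimensions, synthetic data, and downstream learner are illustrative assumptions only.

# Hedged sketch (not the authors' code): project a high-dimensional state
# onto a lower-dimensional manifold before running reinforcement learning.
import numpy as np

rng = np.random.default_rng(0)

# Pretend robot states live in a 50-D space but vary mostly along 3 directions.
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 50))
states = latent @ mixing + 0.01 * rng.normal(size=(1000, 50))

# Fit a linear projection (PCA via SVD) onto a 3-D manifold.
mean = states.mean(axis=0)
_, _, vt = np.linalg.svd(states - mean, full_matrices=False)
projection = vt[:3]                      # rows span the low-dimensional manifold

def compress(state):
    """Map a raw 50-D state to the 3-D learning space used by the RL agent."""
    return (state - mean) @ projection.T

# An RL algorithm (e.g. Q-learning or a policy network) would now operate on
# compress(state) instead of the raw 50-D observation.
print(compress(states[0]).shape)         # -> (3,)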

Comparing Reward Shaping, Visual Hints, and Curriculum Learning
Rey Pocius, David Isele, Mark Roberts, David W. Aha
Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 8135-8136

Common approaches to learn complex tasks in reinforcement learning include reward shaping, environmental hints, or a curriculum. Yet few studies examine how they compare to each other, when one might prefer one approach, or how they may complement each other. As a first step in this direction, we compare reward shaping, hints, and curricula for a Deep RL agent in the game of Minecraft. We seek to answer whether reward shaping, visual hints, or the curricula have the most impact on performance, which we measure as the time to reach the target, the distance from the target, the cumulative reward, or the number of actions taken. Our analyses show that performance is most impacted by the curriculum used and visual hints; shaping had less impact. For similar navigation tasks, the results suggest that designing an effective curriculum and providing appropriate hints improve performance the most.
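As background on one of the compared approaches, here is a small Python sketch of potential-based reward shaping. The grid world, goal location, and potential function are illustrative assumptions, not the Minecraft setup used in the paper.

# Hedged sketch of potential-based reward shaping (Ng et al., 1999); the
# environment and numbers here are hypothetical.
import numpy as np

GOAL = np.array([9, 9])          # hypothetical target cell in a 10x10 grid
GAMMA = 0.99

def potential(state):
    """Higher potential closer to the goal (negative Manhattan distance)."""
    return -np.abs(np.asarray(state) - GOAL).sum()

def shaped_reward(state, next_state, env_reward):
    """Add the shaping term F = gamma*phi(s') - phi(s), which leaves the
    optimal policy unchanged while densifying the reward signal."""
    return env_reward + GAMMA * potential(next_state) - potential(state)

# Example transition: moving one step toward the goal earns a small bonus.
print(shaped_reward(state=(2, 3), next_state=(3, 3), env_reward=0.0))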

Strategic Tasks for Explainable Reinforcement Learning
Rey Pocius, Lawrence Neal, Alan Fern
Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 10007-10008

Commonly used sequential decision making tasks such as the games in the Arcade Learning Environment (ALE) provide rich observation spaces suitable for deep reinforcement learning. However, they consist mostly of low-level control tasks which are of limited use for the development of explainable artificial intelligence (XAI) due to the fine temporal resolution of the tasks. Many of these domains also lack built-in high-level abstractions and symbols. Existing tasks that provide for both strategic decision-making and rich observation spaces are either difficult to simulate or are intractable. We provide a set of new strategic decision-making tasks specialized for the development and evaluation of explainable AI methods, built as constrained mini-games within the StarCraft II Learning Environment.

Demonstrations

Robot-Assisted Hair Brushing

Demo at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)

Outreach
Building Early Elementary Teacher Confidence in Teaching Computer Science Through a Low-Cost, Scalable Research-Practitioner Collaboration
Justin Clough, Patricia Chaffey, Gautam Salhotra, Colin G. Cess, Rey Pocius, Katie Mills
2020 ASEE Annual Conference and Exposition


(website code from this guy)