About

I am a postdoctoral research associate and lab manager for the Kavraki Lab at Rice University under the direction of Dr. Lydia Kavraki. During my Ph.D., I was funded by a NASA Space Technology Research Fellowship and worked with the Robonaut 2 team at NASA JSC. My research interests are in robot motion planning and long-horizon robot autonomy, with a focus on manipulation planning, planning with constraints, and hardware and software for planning.

More details can be found in my curriculum vitae.


News

ICRA 2024

I had three papers accepted to ICRA 2024! Please take a look at C17, C18, and C19 in the publications below.

Reach out if you want to meet up in Yokohama!

Scaling Multi-Modal Planning

I've published a T-RO article that extends my prior work on multi-modal planning (MMP) to complex systems. Now Robonaut 2 can walk across handrails!

EMPP @ IROS 2022

I co-organized the Evaluating Motion Planning Performance Workshop at IROS 2022! Please watch the talks and read the papers of our speakers and contributors.


Code

VAMP

The source code for the vector-accelerated motion planner (VAMP) is coming soon! Please stay tuned to this repository for updates.

Robowflex

Robowflex, a high-level C++ library for MoveIt, is now available here! Robowflex makes using MoveIt simple and has been used in a number of publications. Take a look at the associated paper and documentation online.
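
To give a sense of the high-level API, here is a minimal sketch patterned on the Robowflex example programs. It is written from memory, so the headers and helper classes used here (FetchRobot, OMPL::FetchOMPLPipelinePlanner, MotionRequestBuilder) are assumptions that should be checked against the library's documentation.

// Sketch only: names follow the Robowflex examples from memory and may not
// match the released API exactly.
#include <robowflex_library/builder.h>
#include <robowflex_library/detail/fetch.h>
#include <robowflex_library/planning.h>
#include <robowflex_library/robot.h>
#include <robowflex_library/scene.h>
#include <robowflex_library/util.h>

using namespace robowflex;

int main(int argc, char **argv)
{
    ROS ros(argc, argv);  // Handles ROS initialization and shutdown.

    // Load a Fetch robot (URDF, SRDF, limits, kinematics) in a few lines.
    auto fetch = std::make_shared<FetchRobot>();
    fetch->initialize();

    // An empty planning scene attached to the robot; geometry can be added to it.
    auto scene = std::make_shared<Scene>(fetch);

    // A MoveIt/OMPL planning pipeline wrapped by Robowflex.
    auto planner = std::make_shared<OMPL::FetchOMPLPipelinePlanner>(fetch, "default");
    planner->initialize();

    // Craft a planning query for the arm entirely within one program.
    MotionRequestBuilder request(planner, "arm_with_torso");
    request.setStartConfiguration(fetch->getScratchState());
    request.setGoalConfiguration(fetch->getScratchState());  // Placeholder goal.

    // The response contains the trajectory on success and an error code otherwise.
    auto response = planner->plan(scene, request.getRequest());
    return 0;
}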

MotionBenchMaker

MotionBenchMaker is a tool to generate motion planning datasets, and it ships with pre-generated, realistic datasets for evaluating planners. MotionBenchMaker is available here. Take a look at the associated paper.

Constrained Planning in OMPL

The generic constrained planning framework presented in ISRR 2017 and IJRR 2019 is now available in OMPL! Take a look at the overview and the tutorial. Now available in MoveIt!
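
To give a flavor of the framework, the example below follows the structure of the OMPL constrained-planning tutorial: a point in R^3 is restricted to the unit sphere by wrapping the ambient space and a Constraint in a ProjectedStateSpace, after which an unmodified planner is used. The bounds, tolerances, and start/goal values are illustrative.

#include <iostream>
#include <ompl/base/Constraint.h>
#include <ompl/base/ConstrainedSpaceInformation.h>
#include <ompl/base/ScopedState.h>
#include <ompl/base/spaces/RealVectorStateSpace.h>
#include <ompl/base/spaces/constraint/ProjectedStateSpace.h>
#include <ompl/geometric/SimpleSetup.h>
#include <ompl/geometric/planners/rrt/RRTConnect.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Constraint f(q) = 0 restricting a 3D point to the unit sphere.
class SphereConstraint : public ob::Constraint
{
public:
    SphereConstraint() : ob::Constraint(3, 1) {}  // 3 ambient dimensions, 1 equation.

    void function(const Eigen::Ref<const Eigen::VectorXd> &x,
                  Eigen::Ref<Eigen::VectorXd> out) const override
    {
        out[0] = x.norm() - 1.0;
    }
};

int main()
{
    // Bounded ambient space.
    auto space = std::make_shared<ob::RealVectorStateSpace>(3);
    ob::RealVectorBounds bounds(3);
    bounds.setLow(-2);
    bounds.setHigh(2);
    space->setBounds(bounds);

    // Wrap ambient space and constraint; projection handles constraint adherence.
    auto constraint = std::make_shared<SphereConstraint>();
    auto css = std::make_shared<ob::ProjectedStateSpace>(space, constraint);
    auto csi = std::make_shared<ob::ConstrainedSpaceInformation>(css);

    // Any sampling-based planner can now plan in the constrained space.
    og::SimpleSetup ss(csi);
    ss.setPlanner(std::make_shared<og::RRTConnect>(csi));

    Eigen::VectorXd sv(3), gv(3);
    sv << 0, 0, 1;
    gv << 0, 0, -1;
    ob::ScopedState<> start(css), goal(css);
    start->as<ob::ConstrainedStateSpace::StateType>()->copy(sv);
    goal->as<ob::ConstrainedStateSpace::StateType>()->copy(gv);
    ss.setStartAndGoalStates(start, goal);

    if (ss.solve(5.0))
        ss.getSolutionPath().printAsMatrix(std::cout);
    return 0;
}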


Publications

Peer-Reviewed Journal Articles

2023

J6.

Solving Rearrangement Puzzles using Path Defragmentation in Factored State Spaces
S. Bora Bayraktar, Andreas Orthey, Zachary Kingston, Marc Toussaint, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

Rearrangement puzzles are variations of rearrangement problems in which the elements of a problem are potentially logically linked together. To efficiently solve such puzzles, we develop a motion planning approach based on a new state space that is logically factored, integrating the capabilities of the robot through factors of simultaneously manipulatable joints of an object. Based on this factored state space, we propose less-actions RRT (LA-RRT), a planner which optimizes for a low number of actions to solve a puzzle. At the core of our approach lies a new path defragmentation method, which rearranges and optimizes consecutive edges to minimize action cost. We solve six rearrangement scenarios with a Fetch robot, involving planar table puzzles and an escape room scenario. LA-RRT significantly outperforms the next best asymptotically-optimal planner by 4.01 to 6.58 times improvement in final action cost.




@article{bayraktar2023,
 author = {S. Bora Bayraktar and Andreas Orthey and Zachary Kingston and Marc Toussaint and Lydia E. Kavraki},
 journal = {IEEE Robotics and Automation Letters},
 doi = {10.1109/LRA.2023.3282788},
 number = {8},
 pages = {4529--4536},
 title = {Solving Rearrangement Puzzles using Path Defragmentation in Factored State Spaces},
 volume = {8},
 year = {2023}
}



J5.

Scaling Multimodal Planning: Using Experience and Informing Discrete Search
Zachary Kingston, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher / Video

Robotic manipulation is inherently continuous, but typically has an underlying discrete structure, such as if an object is grasped. Many problems like these are multi-modal, such as pick-and-place tasks where every object grasp and placement is a mode. Multi-modal problems require finding a sequence of transitions between modes - for example, a particular sequence of object picks and placements. However, many multi-modal planners fail to scale when motion planning is difficult (e.g., in clutter) or the task has a long horizon (e.g., rearrangement). This work presents solutions for multi-modal scalability in both these areas. For motion planning, we present an experience-based planning framework ALEF which reuses experience from similar modes both online and from training data. For task satisfaction, we present a layered planning approach that uses a discrete lead to bias search towards useful mode transitions, informed by weights over mode transitions. Together, these contributions enable multi-modal planners to tackle complex manipulation tasks that were previously infeasible or inefficient, and provide significant improvements in scenes with high-dimensional robots.




@article{kingston2023tro,
 author = {Zachary Kingston and Lydia E. Kavraki},
 doi = {10.1109/TRO.2022.3197080},
 journal = {IEEE Transactions on Robotics},
 number = {1},
 pages = {128--146},
 title = {Scaling Multimodal Planning: Using Experience and Informing Discrete Search},
 volume = {39},
 year = {2023}
}



2021

J4.

MotionBenchMaker: A Tool to Generate and Benchmark Motion Planning Datasets
Constantinos Chamzas, Carlos Quintero-Peña, Zachary Kingston, Andreas Orthey, Daniel Rakita, Michael Gleicher, Marc Toussaint, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher / Video / Talk

Recently, there has been a wealth of development in motion planning for robotic manipulation: new motion planners are continuously proposed, each with its own unique set of strengths and weaknesses. However, evaluating these new planners is challenging, and researchers often create their own ad-hoc problems for benchmarking, which is time-consuming, prone to bias, and does not directly compare against other state-of-the-art planners. We present MotionBenchMaker, an open-source tool to generate benchmarking datasets for realistic robot manipulation problems. MotionBenchMaker is designed to be an extensible, easy-to-use tool that allows users to both generate datasets and benchmark them by comparing motion planning algorithms. Empirically, we show the benefit of using MotionBenchMaker as a tool to procedurally generate datasets which helps in the fair evaluation of planners. We also present a suite of over 40 prefabricated datasets, with 5 different commonly used robots in 8 environments, to serve as a common ground for future motion planning research.




@article{chamzas2021mbm,
 author = {Constantinos Chamzas and Carlos Quintero-Peña and Zachary Kingston and Andreas Orthey and Daniel Rakita and Michael Gleicher and Marc Toussaint and Lydia E. Kavraki},
 doi = {10.1109/LRA.2021.3133603},
 journal = {IEEE Robotics and Automation Letters},
 number = {2},
 pages = {882--889},
 title = {MotionBenchMaker: A Tool to Generate and Benchmark Motion Planning Datasets},
 volume = {7},
 year = {2021},
 youtube = {https://www.youtube.com/watch?v=t96Py0QX0NI}
}



2019

J3.

Exploring Implicit Spaces for Constrained Sampling-Based Planning
Zachary Kingston, Mark Moll, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

We present a review and reformulation of manifold constrained sampling-based motion planning within a unifying framework, IMACS (implicit manifold configuration space). IMACS enables a broad class of motion planners to plan in the presence of manifold constraints, decoupling the choice of motion planning algorithm and method for constraint adherence into orthogonal choices. We show that implicit configuration spaces defined by constraints can be presented to sampling-based planners by addressing two key fundamental primitives, sampling and local planning, and that IMACS preserves theoretical properties of probabilistic completeness and asymptotic optimality through these primitives. Within IMACS, we implement projection- and continuation-based methods for constraint adherence, and demonstrate the framework on a range of planners with both methods in simulated and realistic scenarios. Our results show that the choice of method for constraint adherence depends on many factors and that novel combinations of planners and methods of constraint adherence can be more effective than previous approaches. Our implementation of IMACS is open source within the Open Motion Planning Library and is easily extended for novel planners and constraint spaces.




@article{kingston2019imacs,
 author = {Zachary Kingston and Mark Moll and Lydia E. Kavraki},
 doi = {10.1177/0278364919868530},
 journal = {The International Journal of Robotics Research},
 month = {9},
 number = {10--11},
 pages = {1151--1178},
 title = {Exploring Implicit Spaces for Constrained Sampling-Based Planning},
 volume = {38},
 year = {2019}
}



2018

J2.

An Incremental Constraint-Based Framework for Task and Motion Planning
Neil T. Dantam, Zachary Kingston, Swarat Chaudhuri, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

We present a new algorithm for task and motion planning (TMP) and discuss the requirements and abstractions necessary to obtain robust solutions for TMP in general. Our Iteratively Deepened Task and Motion Planning (IDTMP) method is probabilistically-complete and offers improved performance and generality compared to a similar, state-of-the-art, probabilistically-complete planner. The key idea of IDTMP is to leverage incremental constraint solving to efficiently add and remove constraints on motion feasibility at the task level. We validate IDTMP on a physical manipulator and evaluate scalability on scenarios with many objects and long plans, showing order-of-magnitude gains compared to the benchmark planner and a four-times self-comparison speedup from our extensions. Finally, in addition to describing a new method for TMP and its implementation on a physical robot, we also put forward requirements and abstractions for the development of similar planners in the future.
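
The incremental idea can be illustrated with a toy loop: ask a constraint solver for a candidate task-level choice, test it against a motion feasibility check, and add a constraint ruling it out if it fails, rather than restarting the search. The sketch below uses the Z3 C++ API with a stubbed feasibility test; it is a schematic illustration of the idea, not the IDTMP implementation.

// Schematic sketch of incremental constraint solving for task-level choices.
// The motion_feasible() stub stands in for an actual motion planner call.
#include <iostream>
#include <z3++.h>

bool motion_feasible(int placement)
{
    return placement == 2;  // Pretend only placement 2 is reachable.
}

int main()
{
    z3::context ctx;
    z3::solver solver(ctx);
    z3::expr placement = ctx.int_const("placement");
    solver.add(placement >= 0 && placement <= 3);  // Task-level domain of the choice.

    while (solver.check() == z3::sat)
    {
        const int candidate = solver.get_model().eval(placement).get_numeral_int();
        if (motion_feasible(candidate))
        {
            std::cout << "feasible placement: " << candidate << "\n";
            return 0;
        }
        solver.add(placement != candidate);  // Incrementally rule out the failed choice.
    }
    std::cout << "no feasible placement exists\n";
    return 1;
}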




@article{dantam2018tmp,
 author = {Neil T. Dantam and Zachary Kingston and Swarat Chaudhuri and Lydia E. Kavraki},
 doi = {10.1177/0278364918761570},
 journal = {The International Journal of Robotics Research, Special Issue on the 2016 Robotics: Science and Systems Conference},
 number = {10},
 pages = {1134--1151},
 title = {An Incremental Constraint-Based Framework for Task and Motion Planning},
 volume = {37},
 year = {2018}
}



J1.

Sampling-Based Methods for Motion Planning with Constraints
Zachary Kingston, Mark Moll, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

Robots with many degrees of freedom (e.g., humanoid robots and mobile manipulators) have increasingly been employed to accomplish realistic tasks in domains such as disaster relief, spacecraft logistics, and home caretaking. Finding feasible motions for these robots autonomously is essential for their operation. Sampling-based motion planning algorithms have been shown to be effective for these high-dimensional systems. However, incorporating task constraints (e.g., keeping a cup level, writing on a board) into the planning process introduces significant challenges. This survey describes the families of methods for sampling-based planning with constraints and places them on a spectrum delineated by their complexity. Constrained sampling-based methods are based upon two core primitive operations: (1) sampling constraint-satisfying configurations and (2) generating constraint-satisfying continuous motion. Although the basics of sampling-based planning are presented for contextual background, the survey focuses on the representation of constraints and sampling-based planners that incorporate constraints.
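
The first primitive, sampling constraint-satisfying configurations, is commonly implemented by drawing an ambient-space sample and projecting it onto the constraint manifold with a Newton iteration. A minimal Eigen sketch of that projection is below, using a unit-sphere constraint as a stand-in for a real task constraint.

// Newton projection of an ambient sample q onto the manifold f(q) = 0 using
// the constraint Jacobian's pseudo-inverse (via a least-squares solve).
#include <Eigen/Dense>
#include <iostream>

// Illustrative constraint: distance from the unit sphere.
Eigen::VectorXd f(const Eigen::VectorXd &q)
{
    Eigen::VectorXd out(1);
    out[0] = q.norm() - 1.0;
    return out;
}

// Its 1 x n Jacobian.
Eigen::MatrixXd jacobian(const Eigen::VectorXd &q)
{
    return q.transpose() / q.norm();
}

bool project(Eigen::VectorXd &q, double tol = 1e-6, int max_iter = 50)
{
    for (int i = 0; i < max_iter; ++i)
    {
        const Eigen::VectorXd err = f(q);
        if (err.norm() < tol)
            return true;  // q now satisfies the constraint.
        // Newton step: q <- q - J(q)^+ f(q).
        q -= jacobian(q).completeOrthogonalDecomposition().solve(err);
    }
    return false;
}

int main()
{
    Eigen::VectorXd q = Eigen::VectorXd::Random(3);  // Random ambient sample.
    if (project(q))
        std::cout << "constraint-satisfying sample: " << q.transpose() << "\n";
    return 0;
}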




@article{kingston2018ar,
 author = {Zachary Kingston and Mark Moll and Lydia E. Kavraki},
 doi = {10.1146/annurev-control-060117-105226},
 journal = {Annual Review of Control, Robotics, and Autonomous Systems},
 number = {1},
 pages = {159--185},
 title = {Sampling-Based Methods for Motion Planning with Constraints},
 volume = {1},
 year = {2018}
}



Peer-Reviewed Conference Papers

2024

C19.

Motions in Microseconds via Vectorized Sampling-Based Planning
Wil Thomason, Zachary Kingston, and Lydia E. Kavraki
(Equal Contribution. To Appear.)

Bibtex / Abstract / PDF

Modern sampling-based motion planning algorithms typically take between hundreds of milliseconds to dozens of seconds to find collision-free motions for high degree-of-freedom problems. This paper presents performance improvements of more than 500x over the state-of-the-art, bringing planning times into the range of microseconds and solution rates into the range of kilohertz, without specialized hardware. Our key insight is how to exploit fine-grained parallelism within sampling-based planners, providing generality-preserving algorithmic improvements to any such planner and significantly accelerating critical subroutines, such as forward kinematics and collision checking. We demonstrate our approach over a diverse set of challenging, realistic problems for complex robots ranging from 7 to 14 degrees-of-freedom. Moreover, we show that our approach does not require high-power hardware by also evaluating on a low-power single-board computer. The planning speeds demonstrated are fast enough to reside in the range of control frequencies and open up new avenues of motion planning research.
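
The central idea, laying data out so that primitive operations for many configurations run in lock-step across SIMD lanes, can be illustrated with a small structure-of-arrays batch sphere check. This is a generic sketch of the concept, not the VAMP source code.

// Structure-of-arrays batch collision check: eight candidate sphere positions
// are tested against one obstacle in a branch-free loop that a vectorizing
// compiler can map onto SIMD lanes.
#include <array>
#include <cstdio>

constexpr int kLanes = 8;  // E.g., one AVX register of single-precision floats.

struct SphereBatch  // One array per coordinate, rather than an array of structs.
{
    std::array<float, kLanes> x{}, y{}, z{}, r{};
};

// Returns a bitmask with bit i set if lane i collides with the obstacle sphere.
unsigned collide(const SphereBatch &s, float ox, float oy, float oz, float orad)
{
    unsigned mask = 0;
    for (int i = 0; i < kLanes; ++i)
    {
        const float dx = s.x[i] - ox, dy = s.y[i] - oy, dz = s.z[i] - oz;
        const float rr = s.r[i] + orad;
        mask |= (dx * dx + dy * dy + dz * dz <= rr * rr) ? (1u << i) : 0u;
    }
    return mask;
}

int main()
{
    SphereBatch batch;
    for (int i = 0; i < kLanes; ++i)
    {
        batch.x[i] = 0.1f * i;  // Eight hypothetical link positions along x.
        batch.r[i] = 0.05f;
    }
    std::printf("collision mask: 0x%x\n", collide(batch, 0.2f, 0.0f, 0.0f, 0.1f));
    return 0;
}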




@inproceedings{thomason2024vamp,
 author = {Wil Thomason and Zachary Kingston and Lydia E. Kavraki},
 booktitle = {IEEE International Conference on Robotics and Automation},
 title = {Motions in Microseconds via Vectorized Sampling-Based Planning},
 year = {2024}
}



C18.

Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty
Carlos Quintero-Peña, Wil Thomason, Zachary Kingston, Anastasios Kyrillidis, and Lydia E. Kavraki
(To Appear.)

Bibtex / Abstract / PDF

Motion planning under sensing uncertainty is critical for robots in unstructured environments to guarantee safety for both the robot and any nearby humans. Most work on planning under uncertainty does not scale to high-dimensional robots such as manipulators, assumes simplified geometry of the robot or environment, or requires per-object knowledge of noise. Instead, we propose a method that directly models sensor-specific aleatoric uncertainty to find safe motions for high-dimensional systems in complex environments, without exact knowledge of environment geometry. We combine a novel implicit neural model of stochastic signed distance functions with a hierarchical optimization-based motion planner to plan low-risk motions without sacrificing path quality. Our method also explicitly bounds the risk of the path, offering trustworthiness. We empirically validate that our method produces safe motions and accurate risk bounds and is safer than baseline approaches.




@inproceedings{quintero2024impdist,
 author = {Carlos Quintero-Peña and Wil Thomason and Zachary Kingston and Anastasios Kyrillidis and Lydia E. Kavraki},
 booktitle = {IEEE International Conference on Robotics and Automation},
 title = {Stochastic Implicit Neural Signed Distance Functions for Safe Motion Planning under Sensing Uncertainty},
 year = {2024}
}



C17.

Accelerating long-horizon planning with affordance-directed dynamic grounding of abstract skills
Khen Elimelech, Zachary Kingston, Wil Thomason, Moshe Y. Vardi, and Lydia E. Kavraki
(To Appear.)

Bibtex / Abstract / PDF

Long-horizon task planning is important for robot autonomy, especially as a subroutine for frameworks such as Integrated Task and Motion Planning. However, task planning is computationally challenging and struggles to scale to realistic problem settings. We propose to accelerate task planning over an agent's lifetime by integrating learned abstract skills: a generalizable planning experience encoding introduced in earlier work. In this work, we contribute a practical approach to planning with skills by introducing a novel formalism of planning in a skill-augmented domain. We also introduce and formulate the notion of a skill's affordance, which indicates its predicted benefit to the solution, and use it to guide the planning and skill grounding processes. Together, our observations yield an affordance-directed, lazy-search planning algorithm, which can seamlessly compose skills and actions to solve long-horizon planning problems. We evaluate our planner in an object rearrangement domain, where we demonstrate performance benefits relative to a state-of-the-art task planner.




@inproceedings{elimelech2024skills,
 author = {Khen Elimelech and Zachary Kingston and Wil Thomason and Moshe Y. Vardi and Lydia E. Kavraki},
 booktitle = {IEEE International Conference on Robotics and Automation},
 title = {Accelerating long-horizon planning with affordance-directed dynamic grounding of abstract skills},
 year = {2024}
}



2023

C16.

Robots as AI Double Agents: Privacy in Motion Planning
Rahul Shome, Zachary Kingston, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

Robotics and automation are poised to change the landscape of home and work in the near future. Robots are adept at deliberately moving, sensing, and interacting with their environments. The pervasive use of this technology promises societal and economic payoffs due to its capabilities - conversely, the capabilities of robots to move within and sense the world around them is susceptible to abuse. Robots, unlike typical sensors, are inherently autonomous, active, and deliberate. Such automated agents can become AI double agents liable to violate the privacy of coworkers, privileged spaces, and other stakeholders. In this work we highlight the understudied and inevitable threats to privacy that can be posed by the autonomous, deliberate motions and sensing of robots. We frame the problem within broader sociotechnological questions alongside a comprehensive review. The privacy-aware motion planning problem is formulated in terms of cost functions that can be modified to induce privacy-aware behavior - preserving, agnostic, or violating. Simulated case studies in manipulation and navigation, with altered cost functions, are used to demonstrate how privacy-violating threats can be easily injected, sometimes with only small changes in performance (solution path lengths). Such functionality is already widely available. This preliminary work is meant to lay the foundations for near-future, holistic, interdisciplinary investigations that can address questions surrounding privacy in intelligent robotic behaviors determined by planning algorithms.




@inproceedings{shome2023privacy,
 author = {Rahul Shome and Zachary Kingston and Lydia E. Kavraki},
 booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
 doi = {10.1109/IROS55552.2023.10341460},
 pages = {2861--2868},
 title = {Robots as AI Double Agents: Privacy in Motion Planning},
 year = {2023}
}



C15.

Object Reconfiguration with Simulation-Derived Feasible Actions
Yiyuan Lee, Wil Thomason, Zachary Kingston, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

3D object reconfiguration encompasses common robot manipulation tasks in which a set of objects must be moved through a series of physically feasible state changes into a desired final configuration. Object reconfiguration is challenging to solve in general, as it requires efficient reasoning about environment physics that determine action validity. This information is typically manually encoded in an explicit transition system. Constructing these explicit encodings is tedious and error-prone, and is often a bottleneck for planner use. In this work, we explore embedding a physics simulator within a motion planner to implicitly discover and specify the valid actions from any state, removing the need for manual specification of action semantics. Our experiments demonstrate that the resulting simulation-based planner can effectively produce physically valid rearrangement trajectories for a range of 3D object reconfiguration problems without requiring more than an environment description and start and goal arrangements.




@inproceedings{lee2023physics,
 author = {Yiyuan Lee and Wil Thomason and Zachary Kingston and Lydia E. Kavraki},
 booktitle = {IEEE International Conference on Robotics and Automation},
 doi = {10.1109/ICRA48891.2023.10160377},
 pages = {8104--8111},
 title = {Object Reconfiguration with Simulation-Derived Feasible Actions},
 year = {2023}
}



C14.

Optimal Grasps and Placements for Task and Motion Planning in Clutter
Carlos Quintero-Peña, Zachary Kingston, Tianyang Pan, Rahul Shome, Anastasios Kyrillidis, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

Many methods that solve robot planning problems, such as task and motion planners, employ discrete symbolic search to find sequences of valid symbolic actions that are grounded with motion planning. Much of the efficacy of these planners lies in this grounding—bad placement and grasp choices can lead to inefficient planning when a problem has many geometric constraints. Moreover, grounding methods such as naïve sampling often fail to find appropriate values for these choices in the presence of clutter. Towards efficient task and motion planning, we present a novel optimization-based approach for grounding to solve cluttered problems that have many constraints that arise from geometry. Our approach finds an optimal grounding and can provide feedback to discrete search for more effective planning. We demonstrate our method against baseline methods in complex simulated environments.




@inproceedings{quintero2023optimal,
 author = {Carlos Quintero-Peña and Zachary Kingston and Tianyang Pan and Rahul Shome and Anastasios Kyrillidis and Lydia E. Kavraki},
 booktitle = {IEEE International Conference on Robotics and Automation},
 doi = {10.1109/ICRA48891.2023.10161455},
 pages = {3707--3713},
 title = {Optimal Grasps and Placements for Task and Motion Planning in Clutter},
 year = {2023}
}



2022

C13.

Robowflex: Robot Motion Planning with MoveIt Made Easy
Zachary Kingston, and Lydia E. Kavraki
(Nominated for Best Paper for Industrial Robotics Research for Practicality.)

Bibtex / Abstract / PDF / Publisher / Video / Talk

Robowflex is a software library for robot motion planning in industrial and research applications, leveraging the popular MoveIt library and Robot Operating System (ROS) middleware. Robowflex takes advantage of the ease of motion planning with MoveIt while providing an augmented API to craft and manipulate motion planning queries within a single program. Robowflex's high-level API simplifies many common use-cases while still providing access to the underlying MoveIt library. Robowflex is particularly useful for 1) developing new motion planners, 2) evaluation of motion planners, and 3) complex problems that use motion planning (e.g., task and motion planning). Robowflex also provides visualization capabilities, integrations to other robotics libraries (e.g., DART and Tesseract), and is complementary to many other robotics packages. With our library, the user does not need to be an expert at ROS or MoveIt in order to set up motion planning queries, extract information from results, and directly interface with a variety of software components. We provide a few example use-cases that demonstrate its efficacy.




@inproceedings{kingston2022robowflex,
 author = {Zachary Kingston and Lydia E. Kavraki},
 booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
 doi = {10.1109/IROS47612.2022.9981698},
 pages = {3108--3114},
 title = {Robowflex: Robot Motion Planning with MoveIt Made Easy},
 year = {2022}
}



2021

C12.

Using Experience to Improve Constrained Planning on Foliations for Multi-Modal Problems
Zachary Kingston, Constantinos Chamzas, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

Many robotic manipulation problems are multi-modal—they consist of a discrete set of mode families (e.g., whether an object is grasped or placed) each with a continuum of parameters (e.g., where exactly an object is grasped). Core to these problems is solving single-mode motion plans, i.e., given a mode from a mode family (e.g., a specific grasp), find a feasible motion to transition to the next desired mode. Many planners for such problems have been proposed, but complex manipulation plans may require prohibitively long computation times due to the difficulty of solving these underlying single-mode problems. It has been shown that using experience from similar planning queries can significantly improve the efficiency of motion planning. However, even though modes from the same family are similar, they impose different constraints on the planning problem, and thus experience gained in one mode cannot be directly applied to another. We present a new experience-based framework, ALEF, for such multi-modal planning problems. ALEF learns using paths from single-mode problems from a mode family, and applies this experience to novel modes from the same family. We evaluate ALEF on a variety of challenging problems and show a significant improvement in the efficiency of sampling-based planners both in isolation and within a multi-modal manipulation planner.




@inproceedings{kingston2021,
 author = {Zachary Kingston and Constantinos Chamzas and Lydia E. Kavraki},
 booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
 doi = {10.1109/IROS51168.2021.9636236},
 pages = {6922--6927},
 title = {Using Experience to Improve Constrained Planning on Foliations for Multi-Modal Problems},
 year = {2021}
}



C11.

HyperPlan: A Framework for Motion Planning Algorithm Selection and Parameter Optimization
Mark Moll, Constantinos Chamzas, Zachary Kingston, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher

Over the years, many motion planning algorithms have been proposed. It is often unclear which algorithm might be best suited for a particular class of problems. The problem is compounded by the fact that algorithm performance can be highly dependent on parameter settings. This paper shows that hyperparameter optimization is an effective tool in both algorithm selection and parameter tuning over a given set of motion planning problems. We present different loss functions for optimization that capture different notions of optimality. The approach is evaluated on a broad range of scenes using two different manipulators, a Fetch and a Baxter. We show that optimized planning algorithm performance significantly improves upon baseline performance and generalizes broadly in the sense that performance improvements carry over to problems that are very different from the ones considered during optimization.




@inproceedings{moll2021hyper,
 author = {Mark Moll and Constantinos Chamzas and Zachary Kingston and Lydia E. Kavraki},
 booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
 doi = {10.1109/IROS51168.2021.9636651},
 pages = {2511--2518},
 title = {HyperPlan: A Framework for Motion Planning Algorithm Selection and Parameter Optimization},
 year = {2021}
}



C10.

Learning Sampling Distributions Using Local 3D Workspace Decompositions for Motion Planning in High Dimensions
Constantinos Chamzas, Zachary Kingston, Carlos Quintero-Peña, Anshumali Shrivastava, and Lydia E. Kavraki
(Nominated for Best Paper in Cognitive Robotics.)

Bibtex / Abstract / PDF / Publisher / Video

Earlier work has shown that reusing experience from prior motion planning problems can improve the efficiency of similar, future motion planning queries. However, for robots with many degrees-of-freedom, these methods exhibit poor generalization across different environments and often require large datasets that are impractical to gather. We present SPARK and FLAME, two experience-based frameworks for sampling-based planning applicable to complex manipulators in 3D environments. Both combine samplers associated with features from a workspace decomposition into a global biased sampling distribution. SPARK decomposes the environment based on exact geometry while FLAME is more general, and uses an octree-based decomposition obtained from sensor data. We demonstrate the effectiveness of SPARK and FLAME on a real and simulated Fetch robot tasked with challenging pick-and-place manipulation problems. Our approaches can be trained incrementally and significantly improve performance with only a handful of examples, generalizing better over diverse tasks and environments as compared to prior approaches.




@inproceedings{chamzas2021flame,
 author = {Constantinos Chamzas and Zachary Kingston and Carlos Quintero-Peña and Anshumali Shrivastava and Lydia E. Kavraki},
 booktitle = {IEEE International Conference on Robotics and Automation},
 doi = {10.1109/ICRA48506.2021.9561104},
 pages = {1283--1289},
 title = {Learning Sampling Distributions Using Local 3D Workspace Decompositions for Motion Planning in High Dimensions},
 year = {2021},
 youtube = {https://www.youtube.com/watch?v=cH4_lIjjs58}
}



C9.

Finite Horizon Synthesis for Probabilistic Manipulation Domains
Andrew M. Wells, Zachary Kingston, Morteza Lahijanian, Lydia E. Kavraki, and Moshe Y. Vardi

Bibtex / Abstract / PDF / Publisher

Robots have begun operating and collaborating with humans in industrial and social settings. This collaboration introduces challenges: the robot must plan while taking the human’s actions into account. In prior work, the problem was posed as a 2-player deterministic game, with a limited number of human moves. The limit on human moves is unintuitive, and in many settings determinism is undesirable. In this paper, we present a novel planning method for collaborative human-robot manipulation tasks via probabilistic synthesis. We introduce a probabilistic manipulation domain that captures the interaction by allowing for both robot and human actions with states that represent the configurations of the objects in the workspace. The task is specified using Linear Temporal Logic over finite traces (LTLf). We then transform our manipulation domain into a Markov Decision Process (MDP) and synthesize an optimal policy to satisfy the specification on this MDP. We present two novel contributions: a formalization of probabilistic manipulation domains allowing us to apply existing techniques and a comparison of different encodings of these domains. Our framework is validated on a physical UR5 robot.




@inproceedings{wells2021icra,
 author = {Andrew M. Wells and Zachary Kingston and Morteza Lahijanian and Lydia E. Kavraki and Moshe Y. Vardi},
 booktitle = {IEEE International Conference on Robotics and Automation},
 doi = {10.1109/ICRA48506.2021.9561297},
 pages = {6336--6342},
 title = {Finite Horizon Synthesis for Probabilistic Manipulation Domains},
 year = {2021}
}



2020

C8.

Informing Multi-Modal Planning with Synergistic Discrete Leads
Zachary Kingston, Andrew M. Wells, Mark Moll, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher / Video

Robotic manipulation problems are inherently continuous, but typically have underlying discrete structure, e.g., whether or not an object is grasped. This means many problems are multi-modal and in particular have a continuous infinity of modes. For example, in a pick-and-place manipulation domain, every grasp and placement of an object is a mode. Usually manipulation problems require the robot to transition into different modes, e.g., going from a mode with an object placed to another mode with the object grasped. To successfully find a manipulation plan, a planner must find a sequence of valid single-mode motions as well as valid transitions between these modes. Many manipulation planners have been proposed to solve tasks with multi-modal structure. However, these methods require mode-specific planners and fail to scale to very cluttered environments or to tasks that require long sequences of transitions. This paper presents a general layered planning approach to multi-modal planning that uses a discrete "lead" to bias search towards useful mode transitions. The difficulty of achieving specific mode transitions is captured online and used to bias search towards more promising sequences of modes. We demonstrate our planner on complex scenes and show that significant performance improvements are tied to both our discrete "lead" and our continuous representation.




@inproceedings{kingston2020leads,
 author = {Zachary Kingston and Andrew M. Wells and Mark Moll and Lydia E. Kavraki},
 booktitle = {IEEE International Conference on Robotics and Automation},
 doi = {10.1109/ICRA40945.2020.9197545},
 pages = {3199--3205},
 title = {Informing Multi-Modal Planning with Synergistic Discrete Leads},
 year = {2020}
}



C7.

Decoupling Constraints from Sampling-Based Planners
Zachary Kingston, Mark Moll, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher / Video

We present a general unifying framework for sampling-based motion planning under kinematic task constraints which enables a broad class of planners to compute plans that satisfy a given constraint function that encodes, e.g., loop closure, balance, and end-effector constraints. The framework decouples a planner’s method for exploration from constraint satisfaction by representing the implicit configuration space defined by a constraint function. We emulate three constraint satisfaction methodologies from the literature, and demonstrate the framework with a range of planners utilizing these constraint methodologies. Our results show that the appropriate choice of constraint satisfaction methodology depends on many factors, e.g., the dimension of the configuration space and implicit constraint manifold, and number of obstacles. Furthermore, we show that novel combinations of planners and constraint satisfaction methodologies can be more effective than previous approaches. The framework is also easily extended for novel planners and constraint spaces.




@incollection{kingston2020isrr,
 address = {Cham},
 author = {Zachary Kingston and Mark Moll and Lydia E. Kavraki},
 booktitle = {Robotics Research},
 doi = {10.1007/978-3-030-28619-4_62},
 editor = {Amato, N. M. and Hager, G. and Thomas, S. and Torres-Torriti, M.},
 isbn = {978-3-030-28619-4},
 pages = {913--928},
 publisher = {Springer International Publishing},
 title = {Decoupling Constraints from Sampling-Based Planners},
 year = {2020}
}



2018

C6.

Distributed Object Characterization with Local Sensing by a Multi-Robot System
Golnaz Habibi, Sándor P. Fekete, Zachary Kingston, and James McLurkin

Bibtex / Abstract / PDF / Publisher / Video

This paper presents two distributed algorithms for enabling a swarm of robots with local sensing and local coordinates to estimate the dimensions and orientation of an unknown complex polygonal object, i.e., its minimum and maximum width and its main axis. Our first approach is based on a robust heuristic of distributed Principal Component Analysis (DPCA), while the second is based on turning the idea of Rotating Calipers into a distributed algorithm (DRC). We simulate DRC and DPCA methods and test DPCA on real robots. The results show our algorithms successfully estimate the dimension and orientation of convex or concave objects with a reasonable error in the presence of noisy data.




@incollection{habibi2018dars,
 author = {Golnaz Habibi and S{\'a}ndor P. Fekete and Zachary Kingston and James McLurkin},
 booktitle = {Distributed Autonomous Robotic Systems},
 doi = {10.1007/978-3-319-73008-0_15},
 editor = {Roderich Gro{\ss} and Andreas Kolling and Spring Berman and Emilio Frazzoli and Alcherio Martinoli and Fumitoshi Matsuno and Melvin Gauci},
 pages = {205--218},
 publisher = {Springer Proceedings in Advanced Robotics},
 title = {Distributed Object Characterization with Local Sensing by a Multi-Robot System},
 volume = {6},
 year = {2018}
}



2017

C5.

Robonaut 2 and You: Specifying and Executing Complex Operations
William Baker, Zachary Kingston, Mark Moll, Julia Badger, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher / Video

Crew time is a precious resource due to the expense of trained human operators in space. Efficient caretaker robots could lessen the manual labor load required by frequent vehicular and life support maintenance tasks, freeing astronaut time for scientific mission objectives. Humanoid robots can fluidly exist alongside human counterparts due to their form, but they are complex and high-dimensional platforms. This paper describes a system that human operators can use to maneuver Robonaut 2 (R2), a dexterous humanoid robot developed by NASA to research co-robotic applications. The system includes a specification of constraints used to describe operations, and the supporting planning framework that solves constrained problems on R2 at interactive speeds. The paper is developed in reference to an illustrative, typical example of an operation R2 performs to highlight the challenges inherent to the problems R2 must face. Finally, the interface and planner are validated through a case-study using the guiding example on the physical robot in a simulated microgravity environment. This work reveals the complexity of employing humanoid caretaker robots and suggests solutions that are broadly applicable.




@inproceedings{baker2017r2,
 address = {Austin, TX},
 author = {William Baker and Zachary Kingston and Mark Moll and Julia Badger and Lydia E. Kavraki},
 booktitle = {IEEE Workshop on Advanced Robotics and its Social Impacts},
 doi = {10.1109/ARSO.2017.8025204},
 month = {March},
 pages = {1--8},
 title = {Robonaut 2 and You: Specifying and Executing Complex Operations},
 year = {2017}
}



2016

C4.

Incremental Task and Motion Planning: A Constraint-Based Approach
Neil T. Dantam, Zachary Kingston, Swarat Chaudhuri, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher / Video

We present a new algorithm for task and motion planning (TMP) and discuss the requirements and abstractions necessary to obtain robust solutions for TMP in general. Our Iteratively Deepened Task and Motion Planning (IDTMP) method is probabilistically-complete and offers improved performance and generality compared to a similar, state-of-the-art, probabilistically-complete planner. The key idea of IDTMP is to leverage incremental constraint solving to efficiently add and remove constraints on motion feasibility at the task level. We validate IDTMP on a physical manipulator and evaluate scalability on scenarios with many objects and long plans, showing order-of-magnitude gains compared to the benchmark planner and a four-times self-comparison speedup from our extensions. Finally, in addition to describing a new method for TMP and its implementation on a physical robot, we also put forward requirements and abstractions for the development of similar planners in the future.




@inproceedings{dantam2016tmp,
 address = {Ann Arbor, MI},
 author = {Neil T. Dantam and Zachary Kingston and Swarat Chaudhuri and Lydia E. Kavraki},
 booktitle = {Robotics: Science and Systems},
 doi = {10.15607/RSS.2016.XII.002},
 month = {June},
 title = {Incremental Task and Motion Planning: A Constraint-Based Approach},
 year = {2016}
}



2015

C3.

Kinematically Constrained Workspace Control via Linear Optimization
Zachary Kingston, Neil T. Dantam, and Lydia E. Kavraki

Bibtex / Abstract / PDF / Publisher / Video

We present a method for Cartesian workspace control of a robot manipulator that enforces joint-level acceleration, velocity, and position constraints using linear optimization. This method is robust to kinematic singularities. On redundant manipulators, we avoid poor configurations near joint limits by including a maximum permissible velocity term to center each joint within its limits. Compared to the baseline Jacobian damped least-squares method of workspace control, this new approach honors kinematic limits, ensuring physically realizable control inputs and providing smoother motion of the robot. We demonstrate our method on simulated redundant and non-redundant manipulators and implement it on the physical 7-degree-of-freedom Baxter manipulator. We provide our control software under a permissive license.
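
For illustration, one standard way to pose such a kinematically constrained tracking problem as a linear program is sketched below, with slack variables s bounding the L1 tracking error; this is a generic formulation and not necessarily the paper's exact objective.

% Generic kinematically constrained workspace control as a linear program
% (illustrative formulation only).
\begin{aligned}
\min_{\dot{q},\, s} \quad & \mathbf{1}^{\top} s \\
\text{s.t.} \quad & -s \le J(q)\,\dot{q} - \dot{x}_{\mathrm{ref}} \le s, \\
& \dot{q}_{\min} \le \dot{q} \le \dot{q}_{\max}, \\
& q_{\min} \le q + \dot{q}\,\Delta t \le q_{\max}, \\
& \ddot{q}_{\min}\,\Delta t \le \dot{q} - \dot{q}_{\mathrm{prev}} \le \ddot{q}_{\max}\,\Delta t.
\end{aligned}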




@inproceedings{kingston2015lc3,
 author = {Zachary Kingston and Neil T. Dantam and Lydia E. Kavraki},
 booktitle = {IEEE-RAS International Conference on Humanoid Robots},
 doi = {10.1109/HUMANOIDS.2015.7363455},
 month = {Nov},
 pages = {758--764},
 title = {Kinematically Constrained Workspace Control via Linear Optimization},
 year = {2015}
}



C2.

Pipelined Consensus for Global State Estimation in Multi-Agent Systems
Golnaz Habibi, Zachary Kingston, Zijian Wang, Mac Schwager, and James McLurkin

Bibtex / Abstract / PDF / Publisher

This paper presents pipelined consensus, an extension of pair-wise gossip-based consensus, for multi-agent systems using mesh networks. Each agent starts a new consensus in each round of gossiping, and stores the intermediate results for the previous k consensus in a pipeline message. After k rounds of gossiping, the results of the first consensus are ready. The pipeline keeps each consensus independent, so any errors only persist for k rounds. This makes pipelined consensus robust to many real-world problems that other algorithms cannot handle, including message loss, changes in network topology, sensor variance, and changes in agent population. The algorithm is fully distributed and self-stabilizing, and uses a communication message of fixed size. We demonstrate the efficiency of pipelined consensus in two scenarios: computing mean sensor values in a distributed sensor network, and computing a centroid estimate in a multi-robot system. We provide extensive simulation results, and real-world experiments with up to 24 agents. The algorithm produces accurate results, and handles all of the disturbances mentioned above.
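
A toy single-process sketch of the pipelining idea is below: each simulated agent seeds a fresh running average every round, randomly chosen pairs gossip over every stage of their pipelines, and after k rounds the oldest stage holds a finished consensus. This illustrates the mechanism only and is not the paper's distributed implementation.

// Toy simulation of pipelined pair-wise gossip consensus over sensor values.
#include <deque>
#include <iostream>
#include <random>
#include <vector>

struct Agent
{
    double sensor = 0.0;             // Local measurement to be averaged.
    std::deque<double> pipeline;     // Intermediate consensus values, newest first.
};

int main()
{
    const int k = 10;                // Rounds each consensus spends in the pipeline.
    std::vector<Agent> agents(24);
    std::mt19937 rng(0);
    std::uniform_real_distribution<double> measurement(0.0, 10.0);
    for (auto &a : agents)
        a.sensor = measurement(rng);

    std::uniform_int_distribution<int> pick(0, static_cast<int>(agents.size()) - 1);
    for (int round = 0; round < 200; ++round)
    {
        // Every agent starts a new consensus with its own reading; the oldest
        // entry has been gossiped for k rounds and is the finished estimate.
        for (auto &a : agents)
        {
            a.pipeline.push_front(a.sensor);
            if (static_cast<int>(a.pipeline.size()) > k)
                a.pipeline.pop_back();
        }
        // Pair-wise gossip: random pairs average every stage of their pipelines,
        // keeping each of the k in-flight consensuses independent.
        for (std::size_t n = 0; n < agents.size(); ++n)
        {
            Agent &a = agents[pick(rng)];
            Agent &b = agents[pick(rng)];
            for (std::size_t s = 0; s < a.pipeline.size() && s < b.pipeline.size(); ++s)
                a.pipeline[s] = b.pipeline[s] = 0.5 * (a.pipeline[s] + b.pipeline[s]);
        }
    }
    std::cout << "agent 0 estimate of the mean: " << agents[0].pipeline.back() << "\n";
    return 0;
}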




@incollection{habibi2015aamas,
 author = {Golnaz Habibi and Zachary Kingston and Zijian Wang and Mac Schwager and James McLurkin},
 booktitle = {Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems},
 doi = {10.5555/2772879.2773320},
 isbn = {9781450334136},
 pages = {1315--1323},
 publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
 title = {Pipelined Consensus for Global State Estimation in Multi-Agent Systems},
 url = {http://dl.acm.org/citation.cfm?id=2772879.2773320},
 year = {2015}
}



C1.

Distributed Centroid Estimation and Motion Controllers for Collective Transport by Multi-Robot Systems
Golnaz Habibi, Zachary Kingston, William Xie, Mathew Jellins, and James McLurkin

Bibtex / Abstract / PDF / Publisher / Video

This paper presents four distributed motion controllers to enable a group of robots to collectively transport an object towards a guide robot. These controllers include: rotation around a pivot robot, rotation in-place around an estimated centroid of the object, translation, and a combined motion of rotation and translation in which each manipulating robot follows a trochoid path. Three of these controllers require an estimate of the centroid of the object, to use as the axis of rotation. Assuming the object is surrounded by manipulator robots, we approximate the centroid of the object by measuring the centroid of the manipulating robots. Our algorithms and controllers are fully distributed and robust to changes in network topology, robot population, and sensor error. We tested all of the algorithms in real-world environments with 9 robots, and show that the error of the centroid estimation is low, and that all four controllers produce reliable motion of the object.




@inproceedings{habibi2015icra,
 author = {Golnaz Habibi and Zachary Kingston and William Xie and Mathew Jellins and James McLurkin},
 booktitle = {IEEE International Conference on Robotics and Automation},
 doi = {10.1109/ICRA.2015.7139356},
 pages = {1282--1288},
 title = {Distributed Centroid Estimation and Motion Controllers for Collective Transport by Multi-Robot Systems},
 year = {2015}
}



Book Chapters

2020

B1.

Planning Under Manifold Constraints, Encyclopedia of Robotics
Zachary Kingston

Bibtex / Publisher

@inbook{kingston2020book,
 address = {Berlin, Heidelberg},
 author = {Zachary Kingston},
 chapter = {Planning Under Manifold Constraints},
 doi = {10.1007/978-3-642-41610-1_174-1},
 editor = {Marcelo H. Ang and Oussama Khatib and Bruno Siciliano},
 isbn = {978-3-642-41610-1},
 pages = {1--9},
 publisher = {Springer Berlin Heidelberg},
 title = {Encyclopedia of Robotics},
 url = {https://doi.org/10.1007/978-3-642-41610-1_174-1},
 year = {2020}
}



A picture of me from February 2020.

zak@rice.edu