QMDP

This Julia package implements the QMDP approximate solver for POMDP/MDP planning. The QMDP algorithm is described in:

Littman, M. L., Cassandra, A. R., and Kaelbling, L. P. "Learning policies for partially observable environments: Scaling up." In Proceedings of the Twelfth International Conference on Machine Learning (ICML), 1995.

Installation

import Pkg
Pkg.add("QMDP")

Usage

using QMDP
pomdp = MyPOMDP() # replace with a POMDP defined via the POMDPs.jl interface

# initialize the solver
# keyword arguments: max_iterations caps the number of value iteration sweeps,
# belres is the Bellman residual tolerance, and verbose toggles progress printing
solver = QMDPSolver(max_iterations=20,
                    belres=1e-3,
                    verbose=true
                   ) 

# run the solver
policy = solve(solver, pomdp)
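
Here MyPOMDP is a placeholder for any model implementing the POMDPs.jl interface. As a concrete illustration (assuming the POMDPModels package is installed), the same workflow runs on the classic Tiger problem:

using QMDP, POMDPModels

pomdp = TigerPOMDP() # small benchmark POMDP shipped with POMDPModels

solver = QMDPSolver(max_iterations=20, belres=1e-3, verbose=true)
policy = solve(solver, pomdp)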

To compute the optimal action for a given belief, define the belief with the POMDPs.jl distribution interface, or use the DiscreteBelief provided in POMDPTools.

using POMDPTools
b = uniform_belief(pomdp) # initialize to a uniform belief
a = action(policy, b)
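
The policy can also be rolled out in simulation. The sketch below uses the stepthrough simulator and DiscreteUpdater from POMDPTools to maintain the belief between steps; the concrete TigerPOMDP model from POMDPModels is assumed only for illustration.

using POMDPs, POMDPTools, POMDPModels, QMDP

pomdp = TigerPOMDP()                 # illustrative model (assumed available from POMDPModels)
policy = solve(QMDPSolver(), pomdp)  # QMDP policy, as computed above

up = DiscreteUpdater(pomdp)          # exact discrete belief updater
b0 = uniform_belief(pomdp)           # start from a uniform belief

# simulate a short episode, printing state, action, observation, and reward at each step
for (s, a, o, r) in stepthrough(pomdp, policy, up, b0, "s,a,o,r", max_steps=5)
    @show s, a, o, r
end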

To use the more efficient SparseValueIterationSolver from DiscreteValueIteration.jl, pass it directly to the QMDPSolver constructor:

using QMDP, DiscreteValueIteration
pomdp = MyPOMDP()

solver = QMDPSolver(SparseValueIterationSolver(max_iterations=20, verbose=true))

policy = solve(solver, pomdp)
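
The sparse solver stores the transition model as sparse matrices, which is typically faster and more memory-efficient than the default dense ValueIterationSolver on problems with large state spaces and sparse transition structure.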