Problem 1: Value Iteration

Problem Description

This lab comes from Berkeley CS188.
The project has already formalized the problem as a Markov decision process (MDP).
In valueIterationAgents.py, the ValueIterationAgent constructor receives an MDP (which itself carries no discount factor) and a discount factor.
We need to implement the following methods:

  • runValueIteration: run value iteration
  • computeActionFromValues(state): compute the best action according to the value function given by self.values
  • computeQValueFromValues(state, action): return the Q-value of the (state, action) pair according to the value function given by self.values

Background

A Markov decision process is defined by the five-tuple $(S, A, P, R, \gamma)$: states, actions, transition probabilities, rewards, and the discount factor. The project's MDP already provides the following interface (a short sketch showing how these accessors fit together follows the list):

  • mdp.getStates(): all states of the MDP
  • mdp.getPossibleActions(state): all actions available from a given state
  • mdp.getTransitionStatesAndProbs(state, action): for a given state and action, all possible next states together with their probabilities
  • mdp.getReward(state, action, nextState): the immediate reward for a given state, action, and next state
  • mdp.isTerminal(state): whether a state is terminal
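
As a minimal sketch of how these accessors fit together (dump_mdp is a hypothetical helper, not part of the assignment), one can enumerate every transition of the MDP like this:

def dump_mdp(mdp):
    """Print every (state, action, nextState) transition with its probability and reward."""
    for state in mdp.getStates():
        if mdp.isTerminal(state):
            continue  # terminal states have no outgoing actions
        for action in mdp.getPossibleActions(state):
            for nextState, prob in mdp.getTransitionStatesAndProbs(state, action):
                reward = mdp.getReward(state, action, nextState)
                print(state, action, nextState, prob, reward)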

The value iteration algorithm:
For each state $s$, find the action $a$ whose action value is the largest; call that maximum $Q_m$. Update the state value to $V(s) = Q_m$ and set the policy at $s$ to $a$. This is exactly the Bellman optimality backup, written out below.
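
In symbols, with $P$, $R$ and the discount $\gamma$ as returned by the interface above, one sweep of value iteration performs, for every state $s$:

$$Q_k(s,a) = \sum_{s'} P(s' \mid s, a)\,\bigl[R(s, a, s') + \gamma V_k(s')\bigr], \qquad V_{k+1}(s) = \max_a Q_k(s,a)$$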

Code Implementation

# Value iteration template (convergence-threshold version)
def runValueIteration(self):
    while True:
        delta = 0
        new_values = util.Counter()  # values after this sweep
        for state in self.mdp.getStates():
            max_value = -float("inf")
            for action in self.mdp.getPossibleActions(state):
                value = self.computeQValueFromValues(state=state, action=action)
                max_value = max(value, max_value)
            # terminal states have no actions; their value stays 0
            new_values[state] = max_value if max_value != -float("inf") else 0
            delta = max(abs(new_values[state] - self.values[state]), delta)
        self.values = new_values  # update values (and hence the policy)
        if delta < self.theta:    # stop once no state changes by more than theta
            break

# The assignment passes the number of iterations on the command line (-i <iterations>),
# so the loop runs a fixed number of sweeps instead:
def runValueIteration(self):
    for _ in range(self.iterations):
        new_values = util.Counter()  # values after this sweep
        for state in self.mdp.getStates():
            max_value = -float("inf")
            for action in self.mdp.getPossibleActions(state):
                value = self.computeQValueFromValues(state=state, action=action)
                max_value = max(value, max_value)
            # terminal states have no actions; their value stays 0
            new_values[state] = max_value if max_value != -float("inf") else 0
        self.values = new_values  # update values (and hence the policy)
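
Note that each sweep writes into new_values and only assigns it to self.values once the sweep finishes, so every update within a sweep reads the previous sweep's values (batch value iteration). To exercise the agent, the project's grid-world driver can be run with the -i flag mentioned above; the script name and the -a flag are assumed from the standard CS188 project layout:

python gridworld.py -a value -i 100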

Computing the action value Q

# Template that follows the formula Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
def computeQValueFromValues(self, state, action):
    q_value = self.mdp.getReward(state, action)
    for nextState, prob in self.mdp.getTransitionStatesAndProbs(state=state, action=action):
        q_value += self.discount * prob * self.getValue(nextState)
    return q_value

# The mdp's getReward actually takes three arguments (state, action, nextState),
# so the reward is moved inside the sum over successor states:
def computeQValueFromValues(self, state, action):
    q_value = 0
    for nextState, prob in self.mdp.getTransitionStatesAndProbs(state=state, action=action):
        q_value += prob * (self.mdp.getReward(state, action, nextState)
                           + self.discount * self.getValue(nextState))
    return q_value
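
As a quick sanity check of the weighted sum, here is a standalone snippet with made-up numbers (the states, probabilities, reward, and values are hypothetical, not taken from the project):

# Hypothetical transition model for one (state, action) pair:
# the action reaches s1 with probability 0.8 and s2 with probability 0.2.
discount = 0.9
transitions = [("s1", 0.8), ("s2", 0.2)]
values = {"s1": 1.0, "s2": 0.5}    # current V(s') estimates
reward = -0.1                      # R(s, a, s'), the same for both successors here

q = sum(prob * (reward + discount * values[s]) for s, prob in transitions)
print(round(q, 2))  # 0.8*(-0.1 + 0.9*1.0) + 0.2*(-0.1 + 0.9*0.5) = 0.71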

Computing the best action from a state
def computeActionFromValues(self, state):
    max_value = -float("inf")
    best_action = None  # terminal states have no legal actions, so None is returned
    for action in self.mdp.getPossibleActions(state):
        # same as in value iteration: pick the action with the largest Q-value
        value = self.computeQValueFromValues(state=state, action=action)
        if value > max_value:
            best_action = action
            max_value = value
    return best_action
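
In other words, the returned policy is greedy with a one-step lookahead on self.values: once runValueIteration has run enough sweeps, this picks the best available action, and for terminal states (which have no legal actions) it returns None.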

Problem 2: Bridge Crossing Analysis

By the problem statement, noise is the frequency with which the agent ends up in an unintended successor state when it performs an action, i.e. with some probability it does not follow the move chosen by the computed policy. Setting noise to 0.0 makes the transitions deterministic, so the agent can walk straight across the bridge and the test passes.

def question2():
    answerDiscount = 0.9
    answerNoise = 0.0
    return answerDiscount, answerNoise
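
To check the setting by hand, the bridge grid can be run through the grid-world driver; the command below is assumed from the standard CS188 project and only the -i flag is confirmed by the assignment text above:

python gridworld.py -a value -i 100 -g BridgeGrid --discount 0.9 --noise 0.0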