| {"title": "Modeling Agents with Probabilistic Programs", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Agents as probabilistic programs\"\ndescription: One-shot decision problems, expected utility, softmax choice and Monty Hall.\n---"} |
| {"title": "Modeling Agents with Probabilistic Programs", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Cognitive biases and bounded rationality\"\n---"} |
| {"title": "Modeling Agents with Probabilistic Programs", "source": "agentmodels", "source_type": "markdown", "text": "See [Reasoning about Agents](/chapters/4-reasoning-about-agents) (/chapters/4-reasoning-about-agents.html#pomdpDefine, /chapters/4-reasoning-about-agents.html#pomdpInfer), the procrastination example (/chapters/5b-time-inconsistency.html#procrastination, /assets/img/procrastination_mdp.png), and [Bounded Agents](/chapters/5c-myopia) (/assets/img/5c-irl-bandit-diagram.png)."} |
| {"title": "Modeling Agents with Probabilistic Programs", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"MDPs and Gridworld in WebPPL\"\n---"} |
| {: , : , : , : , : , : myopically\reward-myopic agent\reward shaping\cutoff\bound\/assets/img/5b-greedy-bandit.png\diagram\width: 600px;\/assets/img/5b-greedy-bandit-2.png\diagram\width: 400px;\Agent's first 20 actions (during exploration phase): \\n\" + \n map(second,trajectory.slice(0,20)));\n\nvar averageUtility = listMean(map(getUtility, map(first,trajectory)));\nprint('Arm2 is best arm and has expected utility 1.\\n' + \n 'So ideal performance gives average score of: 1 \\n' + \n 'The average score over 40 trials for rewardMyopic agent: ' + \n averageUtility);\n~~~~\n\n\n-------\n\n## Myopic Updating: the basic idea\n\nThe Reward-myopic agent ignores rewards that occur after its myopic cutoff $$C_g$$. By contrast, an \"Update-myopic agent\", takes into account all future rewards but ignores the value of belief updates that occur after a cutoff. Concretely, the agent at time $$t=0$$ assumes they can only *explore* (i.e. update beliefs from observations) up to some cutoff point $$C_m$$ steps into the future, after which they just exploit without updating beliefs. In reality, the agent continues to update after time $$t=C_m$$. The Update-myopic agent, like the Naive hyperbolic discounter, has an incorrect model of their future self.\n\nMyopic updating is optimal for certain special cases of Bandits and has good performance on Bandits in general refp:frazier2008knowledge. It also provides a good fit to human performance in Bernoulli Bandits refp:zhang2013forgetful.\n\n### Myopic Updating: applications and limitations\n\nMyopic Updating has been studied in Machine Learning refp:gonzalez2015glasses and Operations Research refp:ryzhov2012knowledge. In most cases, the cutoff point $$C_m$$ after which the agent assumes himself to exploit is set to $$C_m=1$$. This results in a scalable, analytically tractable optimization problem: pull the arm that maximizes the expected value of future exploitation given you pulled that arm. This \"future exploitation\" means that you pick the arm that is best in expectation for the rest of time.\n\nWe've presented Bandit problems with a finite number of uncorrelated arms. Myopic Updating also works for generalized Bandit Problems: e.g. when rewards are correlated or continuous and in the setting of \ where instead of a fixed number of arms the goal is to optimize a high-dimensional real-valued function. \n\nMyopic Updating does not work well for POMDPs in general. Suppose you are looking for a good restaurant in a foreign city. A good strategy is to walk to a busy street and then find the busiest restaurant. If reaching the busy street takes longer than the myopic cutoff $$C_m$$, then an Update-myopic agent won't see value in this plan. We present a concrete example of this problem below (\"Restaurant Search\"). This example highlights a way in which Bandit problems are an especially simple POMDP. In a Bandit problem, every aspect of the unknown latent state can be queried at any timestep (by pulling the appropriate arm). So even the Myopic Agent with $$C_m=1$$ is sensitive to the information value of every possible observation that the POMDP can yield[^selfmodel].\n\n[^selfmodel]: The Update-myopic agent incorrectly models his future self, by assuming he ceases to update after cutoff point $$C_m$$. This incorrect \"self-modeling\" is also a property of model-free RL agents. For example, a Q-learner's estimation of expected utilities for states ignores the fact that the Q-learner will randomly explore with some probability. 
SARSA, on the other hand, does take its random exploration into account when computing this estimate. But it doesn't model the way in which its future exploration behavior will make certain actions useful in the present (as in the example of finding a restaurant in a foreign city).\n\n### Myopic Updating: formal model\nMyopic Updating only makes sense in the context of an agent that is capable of learning from observations (i.e. in the POMDP rather than MDP setting). So our goal is to generalize our agent model for solving POMDPs to a Myopic Updating with $$C_m \\in [1,\\infty]$$.\n\n**Exercise:** Before reading on, modify the equations defining the [POMDP agent](/chapters/3c-pomdp) in order to generalize the agent model to include Myopic Updating. The optimal POMDP agent will be the special case when $$C_m=\\infty$$.\n\n------------\n\nTo extend the POMDP agent to the Update-myopic agent, we use the idea of *delays* from the previous chapter. These delays are not used to evaluate future rewards (as any discounting agent would use them). They are used to determine how future actions are simulated. If the future action occurs when delay $$d$$ exceeds cutoff point $$C_m$$, then the simulated future self does not do a belief update before taking the action. (This makes the Update-myopic agent analogous to the Naive agent: both simulate the future action by projecting the wrong delay value onto their future self). \n\nWe retain the <a href=\"/chapters/3c-pomdp.html#notation\">notation</a> from the definition of the POMDP agent and skip directly to the equation for the expected utility of a state, which we modify for the Update-myopic agent with cutoff point $$C_m \\in [1,\\infty]$$:\n\n$$\nEU_{b}[s,a,d] = U(s,a) + \\mathbb{E}_{s',o,a'}(EU_{b'}[s',a'_{b'},d+1])\n$$\n\nwhere:\n\n- $$s' \\sim T(s,a)$$ and $$o \\sim O(s',a)$$\n\n- $$a'_{b'}$$ is the softmax action the agent takes given new belief $$b'$$\n\n- the new belief state $$b'$$ is defined as:\n\n$$\nb'(s') \\propto I_{C_m}(s',a,o,d)\\sum_{s \\in S}{T(s,a,s')b(s)}\n$$\n\n<!-- problem with < sign in latex math-->\nwhere $$I_{C_m}(s',a,o,d) = O(s',a,o)$$ if $$d$$ < $$C_m$$ and $$I_{C_m}(s',a,o,d) = 1$$ otherwise.\n\nThe key change from POMDP agent is the definition of $$b'$$. The Update-myopic agent assumes his future self (after the cutoff $$C_m$$) updates only on his last action $$a$$ and not on observation $$o$$. For example, in a deterministic Gridworld the future self would keep track of his locations (as his location depends deterministically on his actions) but wouldn't update his belief about hidden states. \n\nThe implementation of the Update-myopic agent in WebPPL is a direct translation of the definition provided above.\n\n>**Exercise:** Modify the code for the POMDP agent to represent an Update-myopic agent. See this <a href=\>codebox</a> or this library [script](https://github.com/agentmodels/webppl-agents/blob/master/src/agents/makePOMDPAgent.wppl).\n\n\n### Myopic Updating for Bandits\n\nThe Update-myopic agent performs well on a variety of Bandit problems. The following codeboxes compare the Update-myopic agent to the Optimal POMDP agent on binary, two-arm Bandits (see the specific example in Figure 3). <!--TODO: add statement about equivalent performance. -->\n\n<img src=\ alt=\ style=\/>\n\n>**Figure 3**: Bandit problem. The agent's prior includes two hypotheses for the rewards of each arm, with the prior probability of each labeled to the left and right of the boxes. 
The priors on each arm are independent and so there are four hypotheses overall. Boxes with actual rewards have a bold border. \n<br>\n\n<!-- myopic_bandit_performance -->\n~~~~\n// Helper functions for Bandits:\n///fold:\n\n// HELPERS FOR CONSTRUCTING AGENT\n\nvar baseParams = {\n alpha: 1000,\n noDelays: false,\n sophisticatedOrNaive: 'naive',\n updateMyopic: { bound: 1 },\n discount: 0\n};\n\nvar getParams = function(agentPrior) {\n var params = extend(baseParams, { priorBelief: agentPrior });\n return extend(params);\n};\n\nvar getAgentPrior = function(numberOfTrials, priorArm0, priorArm1) {\n return Infer({ model() {\n var armToPrizeDist = { 0: priorArm0(), 1: priorArm1() };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n }});\n};\n\n// HELPERS FOR CONSTRUCTING WORLD\n\n// Possible distributions for arms\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\n\n// Construct Bandit POMDP\nvar getBandit = function(numberOfTrials){\n return makeBanditPOMDP({\n numberOfArms: 2,\n\tarmToPrizeDist: { 0: probably0Dist, 1: probably1Dist },\n\tnumberOfTrials: numberOfTrials,\n\tnumericalPrizes: true\n });\n};\n\nvar getUtility = function(state, action) {\n var prize = state.manifestState.loc;\n return prize === 'start' ? 0 : prize;\n};\n\n// Get score for a single episode of bandits\nvar score = function(out) {\n return listMean(map(getUtility, out));\n};\n///\n\n// Agent prior on arm rewards\n\n// Possible distributions for arms\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\n\n// True latentState:\n// arm0 is probably0Dist, arm1 is probably1Dist (and so is better)\n\n// Agent prior on arms: arm1 (better arm) has higher EV\nvar priorArm0 = function() {\n return categorical([0.5, 0.5], [probably1Dist, probably0Dist]);\n};\nvar priorArm1 = function(){\n return categorical([0.6, 0.4], [probably1Dist, probably0Dist]);\n};\n\n\nvar runAgent = function(numberOfTrials, optimal) {\n // Construct world and agents\n var bandit = getBandit(numberOfTrials);\n var world = bandit.world;\n var startState = bandit.startState;\n var prior = getAgentPrior(numberOfTrials, priorArm0, priorArm1);\n var agentParams = getParams(prior);\n\n var agent = makeBanditAgent(agentParams, bandit, \n optimal ? 'belief' : 'beliefDelay');\n\n return score(simulatePOMDP(startState, world, agent, 'states')); \n};\n\n// Run each agent 10 times and take average of scores\nvar means = map(function(optimal) {\n var scores = repeat(10, function(){ return runAgent(5,optimal); });\n var st = optimal ? 'Optimal: ' : 'Update-Myopic: ';\n print(st + 'Mean scores on 10 repeats of 5-trial bandits\\n' + scores);\n return listMean(scores);\n }, [true, false]);\n \nprint('Overall means for [Optimal,Update-Myopic]: ' + means);\n~~~~\n\n>**Exercise**: The above codebox shows that performance for the two agents is similar. Try varying the priors and the `armToPrizeDist` and verify that performance remains similar. How would you provide stronger empirical evidence that the two algorithms are equivalent for this problem?\n\nThe following codebox computes the runtime for Update-myopic and Optimal agents as a function of the number of Bandit trials. (This takes a while to run.) We see that the Update-myopic agent has better scaling even on a small number of trials. 
Note that neither agent has been optimized for Bandit problems.\n\n>**Exercise:** Think of ways to optimize the Update-myopic agent with $$C_m=1$$ for binary Bandit problems.\n\n<!-- myopic_bandit_scaling -->\n~~~~\n///fold: Similar helper functions as above codebox\n\n// HELPERS FOR CONSTRUCTING AGENT\n\nvar baseParams = {\n alpha: 1000,\n noDelays: false,\n sophisticatedOrNaive: 'naive',\n updateMyopic: { bound: 1 },\n discount: 0\n};\n\nvar getParams = function(agentPrior){\n var params = extend(baseParams, { priorBelief: agentPrior });\n return extend(params);\n};\n\nvar getAgentPrior = function(numberOfTrials, priorArm0, priorArm1){\n return Infer({ model() {\n var armToPrizeDist = { 0: priorArm0(), 1: priorArm1() };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n }});\n};\n\n// HELPERS FOR CONSTRUCTING WORLD\n\n// Possible distributions for arms\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\n\n\n// Construct Bandit POMDP\nvar getBandit = function(numberOfTrials) {\n return makeBanditPOMDP({\n numberOfArms: 2,\n armToPrizeDist: { 0: probably0Dist, 1: probably1Dist },\n numberOfTrials,\n numericalPrizes: true\n });\n};\n\nvar getUtility = function(state, action) {\n var prize = state.manifestState.loc;\n return prize === 'start' ? 0 : prize;\n};\n\n// Get score for a single episode of bandits\nvar score = function(out) {\n return listMean(map(getUtility, out));\n};\n\n\n// Agent prior on arm rewards\n\n// Possible distributions for arms\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\n\n// True latentState:\n// arm0 is probably0Dist, arm1 is probably1Dist (and so is better)\n\n// Agent prior on arms: arm1 (better arm) has higher EV\nvar priorArm0 = function() {\n return categorical([0.5, 0.5], [probably1Dist, probably0Dist]);\n};\nvar priorArm1 = function(){\n return categorical([0.6, 0.4], [probably1Dist, probably0Dist]);\n};\n\n\nvar runAgents = function(numberOfTrials) {\n // Construct world and agents\n var bandit = getBandit(numberOfTrials);\n var world = bandit.world;\n var startState = bandit.startState;\n\n var agentPrior = getAgentPrior(numberOfTrials, priorArm0, priorArm1);\n var agentParams = getParams(agentPrior);\n\n var optimalAgent = makeBanditAgent(agentParams, bandit, 'belief');\n var myopicAgent = makeBanditAgent(agentParams, bandit, 'beliefDelay');\n\n // Get average score across totalTime for both agents\n var runOptimal = function() {\n return score(simulatePOMDP(startState, world, optimalAgent, 'states')); \n };\n\n var runMyopic = function() {\n return score(simulatePOMDP(startState, world, myopicAgent, 'states'));\n };\n\n var optimalDatum = {\n numberOfTrials,\n runtime: timeit(runOptimal).runtimeInMilliseconds*0.001,\n agentType: 'optimal'\n };\n\n var myopicDatum = {\n numberOfTrials,\n runtime: timeit(runMyopic).runtimeInMilliseconds*0.001,\n agentType: 'myopic'\n };\n\n return [optimalDatum, myopicDatum];\n};\n///\n\n// Compute runtime as # Bandit trials increases\nvar totalTimeValues = _.range(9).slice(2);\n\nprint('Runtime in s for [Optimal, Myopic] agents:');\n\nvar runtimeValues = _.flatten(map(runAgents, totalTimeValues));\n\nviz.line(runtimeValues, { groupBy: 'agentType' });\n~~~~\n\n\n### Myopic Updating for the Restaurant Search Problem\n\nThe Update-myopic agent assumes they will not update beliefs after the bound $$C_m$$ and so does not make plans that depend on 
learning something after the bound.\n\nWe illustrate this limitation with a new problem:\n\n>**Restaurant Search:** You are looking for a good restaurant in a foreign city without the aid of a smartphone. You know the quality of some restaurants already and you are uncertain about the others. If you walk right up to a restaurant, you can tell its quality by seeing how busy it is inside. You care about the quality of the restaurant and about minimizing the time spent walking.\n\nHow does the Update-myopic agent fail? Suppose that a few blocks from agent is a great restaurant next to a bad restaurant and the agent doesn't know which is which. If the agent checked inside each restaurant, they would pick out the great one. But if they are Update-myopic, they assume they'd be unable to tell between them.\n\nThe codebox below depicts a toy version of this problem in Gridworld. The restaurants vary in quality between 0 and 5. The agent knows the quality of Restaurant A and is unsure about the other restaurants. One of Restaurants D and E is great and the other is bad. The Optimal POMDP agent will go right up to each restaurant and find out which is great. The Update-myopic agent, with low enough bound $$C_m$$, will either go to the known good restaurant A or investigate one of the restaurants that is closer than D and E.\n\n<!--TODO: Toy version is lame (too small). Why is the myopic version so slow?\n\nTODO: gridworld draw should take pomdp trajectories. they should also take POMDP as \"world\". \n-->\n\n<!-- optimal_agent_restaurant_search -->\n~~~~\nvar pomdp = makeRestaurantSearchPOMDP();\nvar world = pomdp.world;\nvar makeUtilityFunction = pomdp.makeUtilityFunction;\nvar startState = pomdp.startState;\n\nvar agentPrior = Infer({ model() {\n var rewardD = uniformDraw([0,5]); // D is bad or great (E is opposite)\n var latentState = {\n A: 3,\n B: uniformDraw(_.range(6)),\n C: uniformDraw(_.range(6)),\n D: rewardD,\n E: 5 - rewardD\n };\n return {\n manifestState: pomdp.startState.manifestState, \n latentState\n };\n}});\n\n// Construct optimal agent\nvar params = {\n utility: makeUtilityFunction(-0.01), // timeCost is -.01\n alpha: 1000,\n priorBelief: agentPrior\n};\n\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(pomdp.startState, world, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\nprint('Quality of restaurants: \\n' + \n JSON.stringify(pomdp.startState.latentState));\nviz.gridworld(pomdp.mdp, { trajectory: manifestStates });\n~~~~\n\n>**Exercise:** The codebox below shows the behavior the Update-myopic agent. Try different values for the `myopicBound` parameter. For values in $$[1,2,3]$$, explain the behavior of the Update-myopic agent. 
\n\n<!-- myopic_agent_restaurant_search -->\n~~~~\n///fold: Construct world and agent prior as above\nvar pomdp = makeRestaurantSearchPOMDP();\nvar world = pomdp.world;\nvar makeUtilityFunction = pomdp.makeUtilityFunction;\n\nvar agentPrior = Infer({ model() {\n var rewardD = uniformDraw([0,5]); // D is bad or great (E is opposite)\n var latentState = {\n A: 3,\n B: uniformDraw(_.range(6)),\n C: uniformDraw(_.range(6)),\n D: rewardD,\n E: 5 - rewardD\n };\n return {\n manifestState: pomdp.startState.manifestState, \n latentState\n };\n}});\n///\n\nvar myopicBound = 1;\n\nvar params = {\n utility: makeUtilityFunction(-0.01),\n alpha: 1000,\n priorBelief: agentPrior,\n noDelays: false,\n discount: 0,\n sophisticatedOrNaive: 'naive',\n updateMyopic: { bound: myopicBound }\n};\n\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(pomdp.startState, world, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\n\nprint('Rewards for each restaurant: ' + \n JSON.stringify(pomdp.startState.latentState));\nprint('Myopic bound: ' + myopicBound);\nviz.gridworld(pomdp.mdp, { trajectory: manifestStates });\n~~~~\n\nNext chapter: [Joint inference of biases and preferences I](/chapters/5d-joint-inference.html)\n\n<br>\n\n### Footnotes\n", "date_published": "2017-03-19T18:54:16Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5c-myopic.md"} |
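To make the "Myopic Updating: formal model" section above more concrete, here is a minimal sketch of the delay-gated belief update. It uses a toy two-hypothesis world of our own (with an assumed 0.8 observation accuracy and illustrative names like `updateBeliefMyopic`), not the actual webppl-agents implementation. Before the cutoff $$C_m$$ the agent conditions on the observation as usual; at or beyond the cutoff it only propagates its belief through the transition function.

~~~~
// Toy two-hypothesis example of the Update-myopic belief update.
// Hidden state: whether an arm is good; it never changes.
var transition = function(state, action) { return state; };
// Noisy observation of the hidden state (assumed 0.8 accuracy)
var observe = function(state) {
  return flip(0.8) ? state : (state === 'armIsGood' ? 'armIsBad' : 'armIsGood');
};

var updateBeliefMyopic = function(belief, observation, action, delay, myopicBound) {
  return Infer({ model() {
    var state = sample(belief);
    var predictedNextState = transition(state, action);
    // Only condition on the observation before the myopic cutoff
    if (delay < myopicBound) {
      var predictedObservation = observe(predictedNextState);
      condition(_.isEqual(predictedObservation, observation));
    }
    return predictedNextState;
  }});
};

var prior = Categorical({ vs: ['armIsGood', 'armIsBad'], ps: [0.5, 0.5] });

// Before the cutoff (delay 0 < bound 1) the observation shifts the belief:
viz.table(updateBeliefMyopic(prior, 'armIsGood', 'pull', 0, 1));
// At or past the cutoff (delay 1 >= bound 1) the belief is unchanged:
viz.table(updateBeliefMyopic(prior, 'armIsGood', 'pull', 1, 1));
~~~~

The second call returns the prior unchanged, which is exactly the sense in which the Update-myopic agent assumes its future self stops learning.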
| {"id": "f9925fa4aa8c50448d99bfdb6889ffa9", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/3a-mdp.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Sequential decision problems: MDPs\"\ndescription: Markov Decision Processes, efficient planning with dynamic programming.\n---\n\n## Introduction\n\nThe [previous chapter](/chapters/3-agents-as-programs.html) introduced agent models for solving simple, one-shot decision problems. The next few sections introduce *sequential* problems, where an agent's choice of action *now* depends on the actions they will choose in the future. As in game theory, the decision maker must coordinate with another rational agent. But in sequential decision problems, that rational agent is their future self.\n\nAs a simple illustration of a sequential decision problem, suppose that an agent, Bob, is looking for a place to eat. Bob gets out of work in a particular location (indicated below by the blue circle). He knows the streets and the restaurants nearby. His decision problem is to take a sequence of actions such that (a) he eats at a restaurant he likes and (b) he does not spend too much time walking. Here is a visualization of the street layout. The labels refer to different types of restaurants: a chain selling Donuts, a Vegetarian Salad Bar and a Noodle Shop. \n\n~~~~\nvar ___ = ' '; \nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({ grid, start: [3, 1] });\n\nviz.gridworld(mdp.world, { trajectory : [mdp.startState] });\n~~~~\n\n<a id=\></a>\n\n## Markov Decision Processes: Definition\n\nWe represent Bob's decision problem as a Markov Decision Process (MDP) and, more specifically, as a discrete \"Gridworld\" environment. An MDP is a tuple $$ \\left\\langle S,A(s),T(s,a),U(s,a) \\right\\rangle$$, including the *states*, the *actions* in each state, the *transition function* that maps state-action pairs to successor states, and the *utility* or *reward* function. In our example, the states $$S$$ are Bob's locations on the grid. At each state, Bob selects an action $$a \\in \\{ \\text{up}, \\text{down}, \\text{left}, \\text{right} \\} $$, which moves Bob around the grid (according to transition function $$T$$). In this example we assume that Bob's actions, as well as the transitions and utilities, are all deterministic. However, our approach generalizes to noisy actions, stochastic transitions and stochastic utilities.\n\nAs with the one-shot decisions of the previous chapter, the agent in an MDP will choose actions that *maximize expected utility*. This depends on the total utility of the *sequence* of states that the agent visits. Formally, let $$EU_{s}[a]$$ be the expected (total) utility of action $$a$$ in state $$s$$. 
The agent's choice is a softmax function of this expected utility:\n\n$$\nC(a; s) \\propto e^{\\alpha EU_{s}[a]}\n$$\n\nThe expected utility depends on both immediate utility and, recursively, on future expected utility:\n\n<a id=\>**Expected Utility Recursion**</a>:\n\n$$\nEU_{s}[a] = U(s, a) + \\mathbb{E}_{s', a'}(EU_{s'}[a'])\n$$\n\n<br>\nwith the next state $$s' \\sim T(s,a)$$ and $$a' \\sim C(s')$$. The decision problem ends either when a *terminal* state is reached or when the time-horizon is reached. (In the next few chapters the time-horizon will always be finite). \n\nThe intuition to keep in mind for solving MDPs is that the expected utility propagates backwards from future states to the current action. If a high utility state can be reached by a sequence of actions starting from action $$a$$, then action $$a$$ will have high expected utility -- *provided* that the sequence of actions is taken with high probability and there are no low utility steps along the way.\n\n\n## Markov Decision Processes: Implementation\n\nThe recursive decision rule for MDP agents can be directly translated into WebPPL. The `act` function takes the agent's state as input, evaluates the expectation of actions in that state, and returns a softmax distribution over actions. The expected utility of actions is computed by a separate function `expectedUtility`. Since an action's expected utility depends on future actions, `expectedUtility` calls `act` in a mutual recursion, bottoming out when a terminal state is reached or when time runs out. \n\nWe illustrate this \"MDP agent\" on a simple MDP:\n\n### Integer Line MDP\n- **States**: Points on the integer line (e.g -1, 0, 1, 2).\n\n- **Actions/transitions**: Actions \"left\", \"right\" and \"stay\" move the agent deterministically along the line in either direction.\n\n- **Utility**: The utility is $$1$$ for the state corresponding to the integer $$3$$ and is $$0$$ otherwise. \n\n\nHere is a WebPPL agent that starts at the origin (`state === 0`) and that takes a first step (to the right):\n\n~~~~\nvar transition = function(state, action) {\n return state + action;\n};\n\nvar utility = function(state) {\n if (state === 3) {\n return 1;\n } else {\n return 0;\n }\n};\n\nvar makeAgent = function() { \n \n var act = function(state, timeLeft) {\n return Infer({ model() {\n var action = uniformDraw([-1, 0, 1]);\n var eu = expectedUtility(state, action, timeLeft);\n factor(100 * eu);\n return action;\n }});\n };\n\n var expectedUtility = function(state, action, timeLeft){\n var u = utility(state, action);\n var newTimeLeft = timeLeft - 1;\n if (newTimeLeft === 0){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var nextAction = sample(act(nextState, newTimeLeft));\n return expectedUtility(nextState, nextAction, newTimeLeft);\n }}));\n }\n };\n\n return { act };\n}\n\nvar act = makeAgent().act;\n\nvar startState = 0;\nvar totalTime = 4;\n\n// Agent's move '-1' means 'left', '0' means 'stay', '1' means 'right'\nprint(\ + sample(act(startState, totalTime)));\n~~~~\n\nThis code computes the agent's initial action, given that the agent will get to take four actions in total. 
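To connect the `factor(100 * eu)` statement in the codebox above to the softmax choice rule $$C(a; s) \propto e^{\alpha EU_{s}[a]}$$, here is a small illustrative sketch (not part of the original chapter). In this integer-line example, only moving right can reach the utility-1 state within the available steps, so the expected utilities of the three first actions work out to 0 (left), 0 (stay) and 1 (right).

~~~~
// Sketch: the softmax choice rule computed two ways, using the expected
// utilities of the three first actions in the integer-line example
// (left: 0, stay: 0, right: 1) and alpha = 100 as in factor(100 * eu).
var alpha = 100;
var actions = [-1, 0, 1];
var eu = function(action) { return action === 1 ? 1 : 0; };

// 1. Via factor, as in the agent code above
var softmaxViaFactor = Infer({ model() {
  var action = uniformDraw(actions);
  factor(alpha * eu(action));
  return action;
}});

// 2. Via explicit normalization of e^(alpha * EU)
var weights = map(function(a) { return Math.exp(alpha * eu(a)); }, actions);
var z = sum(weights);
print('Explicit softmax probabilities: ' +
      map(function(w) { return w / z; }, weights));
viz(softmaxViaFactor);
~~~~

With $$\alpha = 100$$ the softmax places almost all probability on moving right, which is why the agent's first step is to the right.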
To simulate the agent's entire trajectory, we add a third function `simulate`, which updates and stores the world state in response to the agent's actions: \n\n~~~~\nvar transition = function(state, action) {\n return state + action;\n};\n\nvar utility = function(state) {\n if (state === 3) {\n return 1;\n } else {\n return 0;\n }\n};\n\nvar makeAgent = function() { \n var act = function(state, timeLeft) {\n return Infer({ model() {\n var action = uniformDraw([-1, 0, 1]);\n var eu = expectedUtility(state, action, timeLeft);\n factor(100 * eu);\n return action;\n }});\n };\n\n var expectedUtility = function(state, action, timeLeft) {\n var u = utility(state, action);\n var newTimeLeft = timeLeft - 1;\n if (newTimeLeft === 0) {\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var nextAction = sample(act(nextState, newTimeLeft));\n return expectedUtility(nextState, nextAction, newTimeLeft);\n }}));\n }\n };\n\n return { act };\n}\n\n\nvar act = makeAgent().act;\n\nvar simulate = function(state, timeLeft){\n if (timeLeft === 0){\n return [];\n } else {\n var action = sample(act(state, timeLeft));\n var nextState = transition(state, action); \n return [state].concat(simulate(nextState, timeLeft - 1))\n }\n};\n\nvar startState = 0;\nvar totalTime = 4;\nprint(\"Agent's trajectory: \" + simulate(startState, totalTime));\n~~~~\n\n## Restaurant Choice\n", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "3a-mdp.md"} |
| {"title": "Modeling Agents with Probabilistic Programs", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Reinforcement Learning to Learn MDPs\"\n---"} |
| {: , : , : , : , : , : Time inconsistency I\pre-commitment\time preference\in the continuous time setting, the only discount function such that the optimal policy doesn't vary in time is exponential discounting\". In the discrete-time setting, refp:lattimore2014general prove the same result, as well as discussing optimal strategies for sophisticated time-inconsistent agents.\n\nWhat are the effects of exponential discounting? We return to the deterministic Bandit problem from Chapter III.3 (see Figure 1). Suppose a person decides every year where to go on a skiing vacation. There is a fixed set of options {Tahoe, Chile, Switzerland} and a finite time horizon[^bandit]. The person discounts exponentially and so they prefer a good vacation now to an even better one in the future. This means they are less likely to *explore*, since exploration takes time to pay off.\n\n\n<img src=\"/assets/img/5a-irl-bandit.png\" alt=\"diagram\" style=\"width: 600px;\"/>\n\n>**Figure 1**: Deterministic Bandit problem. The agent tries different arms/destinations and receives rewards. The reward for Tahoe is known but Chile and Switzerland are both unknown. The actual best option is Tahoe. \n<br>\n\n[^bandit]: As noted above, exponential discounting is usually combined with an *unbounded* time horizon. However, if a human makes a series of decisions over a long time scale, then it makes sense to include their time preference. For this particular example, imagine the person is looking for the best skiing or sports facilities and doesn't care about variety. There could be a known finite time horizon because at some age they are too old for adventurous skiing. \n\n<!-- exponential_discount_vs_optimal_bandits -->\n~~~~\n///fold:\nvar baseParams = {\n noDelays: false,\n discount: 0,\n sophisticatedOrNaive: 'naive'\n};\n\nvar armToPlace = function(arm){\n return {\n 0: \,\n 1: \,\n 2: \\n }[arm];\n};\n\nvar display = function(trajectory) {\n return map(armToPlace, most(trajectory));\n};\n///\n\n// Arms are skiing destinations:\n// 0: \, 1: \, 2: \\n\n// Actual utility for each destination\nvar trueArmToPrizeDist = {\n 0: Delta({ v: 1 }),\n 1: Delta({ v: 0 }),\n 2: Delta({ v: 0.5 })\n};\n\n// Constuct Bandit world\nvar numberOfTrials = 10;\nvar bandit = makeBanditPOMDP({\n numberOfArms: 3,\n armToPrizeDist: trueArmToPrizeDist,\n numberOfTrials,\n numericalPrizes: true\n});\n\nvar world = bandit.world;\nvar start = bandit.startState;\n\n// Agent prior for utility of each destination\nvar priorBelief = Infer({ model() {\n var armToPrizeDist = {\n // Tahoe has known utility 1:\n 0: Delta({ v: 1 }),\n // Chile has high variance:\n 1: categorical([0.9, 0.1],\n [Delta({ v: 0 }), Delta({ v: 5 })]),\n // Switzerland has high expected value:\n 2: uniformDraw([Delta({ v: 0.5 }), Delta({ v: 1.5 })]) \n };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n}});\n\nvar discountFunction = function(delay) {\n return Math.pow(0.5, delay);\n};\n\nvar exponentialParams = extend(baseParams, { discountFunction, priorBelief });\nvar exponentialAgent = makeBanditAgent(exponentialParams, bandit,\n 'beliefDelay');\nvar exponentialTrajectory = simulatePOMDP(start, world, exponentialAgent, 'actions');\n\nvar optimalParams = extend(baseParams, { priorBelief });\nvar optimalAgent = makeBanditAgent(optimalParams, bandit, 'belief');\nvar optimalTrajectory = simulatePOMDP(start, world, optimalAgent, 'actions');\n\n\nprint('exponential discounting trajectory: ' + display(exponentialTrajectory));\nprint('\\noptimal trajectory: ' + 
display(optimalTrajectory));\n~~~~\n\n \n#### Discounting and time inconsistency\n\nExponential discounting is typically thought of as a *relative* time preference. A fixed reward will be discounted by a factor of $$\\delta^{-30}$$ if received on Day 30 rather than Day 0. On Day 30, the same reward is discounted by $$\\delta^{-30}$$ if received on Day 60 and not at all if received on Day 30. This relative time preference is \ in a superficial sense. With $$\\delta=0.95$$ per day (and linear utility in money), $100 after 30 days is worth $21 and $110 at 31 days is worth $22. Yet when the 30th day arrives, they are worth $100 and $105 respectively[^inconsistent]! The key point is that whereas these *magnitudes* have changed, the *ratios* stay fixed. Indeed, the ratio between a pair of outcomes stays fixed regardless of when the exponential discounter evaluates them. In summary: while a discounting agent evaluates two prospects in the future as worth little compared to similar near-term prospects, the agent agrees with their future self about which of the two future prospects is better.\n\n[^inconsistent]: One can think of exponential discounting in a non-relative way by choosing a fixed staring time in the past (e.g. the agent's birth) and discounting everything relative to that. This results in an agent with a preference to travel back in time to get higher rewards!\n\nAny smooth discount function other than an exponential will result in preferences that reverse over time refp:strotz1955myopia. So it's not so suprising that untutored humans should be subject to such reversals[^reversal]. Various functional forms for human discounting have been explored in the literature. We describe the *hyperbolic discounting* model refp:ainslie2001breakdown because it is simple and well-studied. Other functional form can be substituted into our models.\n\n[^reversal]: Without computational aids, human representations of discrete and continuous quantities (including durations in time and dollar values) are systematically inaccurate. See refp:dehaene2011number. \n\nHyperbolic and exponential discounting curves are illustrated in Figure 2. We plot the discount factor $$D$$ as a function of time $$t$$ in days, with constants $$\\delta$$ and $$k$$ controlling the slope of the function. In this example, each constant is set to 2. The exponential is:\n\n$$\nD=\\frac{1}{\\delta^t}\n$$\n\nThe hyperbolic function is:\n\n$$\nD=\\frac{1}{1+kt}\n$$\n\nThe crucial difference between the curves is that the hyperbola is initially steep and then becomes almost flat, while the exponential continues to be steep. This means that exponential discounting is time consistent and hyperbolic discounting is not. \n\n~~~~\nvar delays = _.range(7);\nvar expDiscount = function(delay) {\n return Math.pow(0.5, delay); \n};\nvar hypDiscount = function(delay) {\n return 1.0 / (1 + 2*delay);\n};\nvar makeExpDatum = function(delay){\n return {\n delay, \n discountFactor: expDiscount(delay),\n discountType: 'Exponential discounting: 1/2^t'\n };\n};\nvar makeHypDatum = function(delay){\n return {\n delay,\n discountFactor: hypDiscount(delay),\n discountType: 'Hyperbolic discounting: 1/(1 + 2t)'\n };\n};\nvar expData = map(makeExpDatum, delays);\nvar hypData = map(makeHypDatum, delays);\nviz.line(expData.concat(hypData), { groupBy: 'discountType' });\n~~~~\n\n>**Figure 2:** Graph comparing exponential and hyperbolic discount curves. \n\n<a id=\></a>\n>**Exercise:** We return to our running example but with slightly different numbers. 
The agent chooses between receiving $100 after 4 days or $110 after 5 days. The goal is to compute the preferences over each option for both exponential and hyperbolic discounters, using the discount curves shown in Figure 2. Compute the following:\n\n> 1. The discounted utility of the $100 and $110 rewards relative to Day 0 (i.e. how much the agent values each option when the rewards are 4 or 5 days away).\n>2. The discounted utility of the $100 and $110 rewards relative to Day 4 (i.e. how much each option is valued when the rewards are 0 or 1 day away).\n\n### Time inconsistency and sequential decision problems\n\nWe have shown that hyperbolic discounters have different preferences over the $100 and $110 depending on when they make the evaluation. This conflict in preferences leads to complexities in planning that don't occur in the optimal (PO)MDP agents which either discount exponentially or do not discount at all.\n\nConsider the example in the exercise <a href=#exercise>above</a> and imagine you have time inconsistent preferences. On Day 0, you write down your preference but on Day 4 you'll be free to change your mind. If you know your future self would choose the $100 immediately, you'd pay a small cost now to *pre-commit* your future self. However, if you believe your future self will share your current preferences, you won't pay this cost (and so you'll end up taking the $100). This illustrates a key distinction. Time inconsistent agents can be \"Naive\" or \"Sophisticated\":\n\n- **Naive agent**: assumes his future self shares his current time preference. For example, a Naive hyperbolic discounter assumes his far future self has a nearly flat discount curve (rather than the \"steep then flat\" discount curve he actually has). \n\n- **Sophisticated agent**: has the correct model of his future self's time preference. A Sophisticated hyperbolic discounter has a nearly flat discount curve for the far future but is aware that his future self does not share this discount curve.\n\nBoth kinds of agents evaluate rewards differently at different times. To distinguish a hyperbolic discounter's current and future selves, we refer to the agent acting at time $$t_i$$ as the $$t_i$$-agent. A Sophisticated agent, unlike a Naive agent, has an accurate model of his future selves. The Sophisticated $$t_0$$-agent predicts the actions of the $$t$$-agents (for $$t>t_0$$) that would conflict with his preferences. To prevent these actions, the $$t_0$$-agent tries to take actions that *pre-commit* the future agents to outcomes the $$t_0$$-agent prefers[^sophisticated].\n\n[^sophisticated]: As has been pointed out previously, there is a kind of \"inter-generational\" conflict between agent's future selves. If pre-commitment actions are available at time $$t_0$$, the $$t_0$$-agent does better in expectation if it is Sophisticated rather than Naive. Equivalently, the $$t_0$$-agent's future selves will do better if the agent is Naive.\n\n\n### Naive and Sophisticated Agents: Gridworld Example\n\nBefore describing our formal model and implementation of Naive and Sophisticated hyperbolic discounters, we illustrate their contrasting behavior using the Restaurant Choice example. We use the MDP version, where the agent has full knowledge of the locations of restaurants and of which restaurants are open. Recall the problem setup: \n\n>**Restaurant Choice**: Bob is looking for a place to eat. 
His decision problem is to take a sequence of actions such that (a) he eats at a restaurant he likes and (b) he does not spend too much time walking. The restaurant options are: the Donut Store, the Vegetarian Salad Bar, and the Noodle Shop. The Donut Store is a chain with two local branches. We assume each branch has identical utility for Bob. We abbreviate the restaurant names as \"Donut South\", \"Donut North\", \"Veg\" and \"Noodle\".\n\nThe only difference from previous versions of Restaurant Choice is that restaurants now have *two* utilities. On entering a restaurant, the agent first receives the *immediate reward* (i.e. how good the food tastes) and at the next timestep receives the *delayed reward* (i.e. how good the person feels after eating it).\n\n**Exercise:** Run the codebox immediately below. Think of ways in which Naive and Sophisticated hyperbolic discounters with identical preferences (i.e. identical utilities for each restaurant) might differ for this decision problem. \n\n<!-- draw_choice -->\n~~~~\n///fold: restaurant choice MDP\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\nviz.gridworld(mdp.world, { trajectory: [mdp.startState] });\n~~~~\n\nThe next two codeboxes show the behavior of two hyperbolic discounters. Each agent has the same preferences and discount function. 
They differ only in that the first is Naive and the second is Sophisticated.\n\n<!-- draw_naive -->\n~~~~\n///fold: restaurant choice MDP, naiveTrajectory\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: naiveTrajectory });\n~~~~\n\n<!-- draw_sophisticated -->\n~~~~\n///fold: restaurant choice MDP, sophisticatedTrajectory\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar sophisticatedTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"r\"],\n [{\"loc\":[4,3],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"r\"],\n [{\"loc\":[5,3],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[4,3]},\"u\"],\n [{\"loc\":[5,4],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[5,3]},\"u\"],\n [{\"loc\":[5,5],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[5,4]},\"u\"],\n [{\"loc\":[5,6],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[5,5]},\"l\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":3,\"previousLoc\":[5,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":2,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":2,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: sophisticatedTrajectory });\n~~~~\n\n>**Exercise:** (Try this exercise *before* reading further). Your goal is to do preference inference from the observed actions in the codeboxes above (using only a pen and paper). 
The discount function is the hyperbola $$D=1/(1+kt)$$, where $$t$$ is the time from the present, $$D$$ is the discount factor (to be multiplied by the utility) and $$k$$ is a positive constant. Find a single setting for the utilities and discount function that produce the behavior in both the codeboxes above. This includes utilities for the restaurants (both *immediate* and *delayed*) and for the `timeCost` (the negative utility for each additional step walked), as well as the discount constant $$k$$. Assume there is no softmax noise. \n\n------\n\nThe Naive agent goes to Donut North, even though Donut South (which has identical utility) is closer to the agent's starting point. One possible explanation is that the Naive agent has a higher utility for Veg but gets \ by Donut North on their way to Veg[^naive_path].\n\n[^naive_path]: At the start, no restaurants can be reached quickly and so the agent's discount function is nearly flat when evaluating each one of them. This makes Veg look most attractive (given its higher overall utility). But going to Veg means getting closer to Donut North, which becomes more attractive than Veg once the agent is close to it (because of the discount function). Taking an inefficient path -- one that is dominated by another path -- is typical of time-inconsistent agents. \n\nThe Sophisticated agent can accurately model what it *would* do if it ended up in location [3,5] (adjacent to Donut North). So it avoids temptation by taking the long, inefficient route to Veg. \n\nIn this simple example, the Naive and Sophisticated agents each take paths that optimal time-consistent MDP agents (without softmax noise) would never take. So this is an example where a bias leads to a *systematic* deviation from optimality and behavior that is not predicted by an optimal model. In Chapter 5.3 we explore inference of preferences for time inconsistent agents.\n\nNext chapter: [Time inconsistency II](/chapters/5b-time-inconsistency.html)\n\n<br>\n\n### Footnotes\n", "date_published": "2019-08-24T14:52:08Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5a-time-inconsistency.md"} |
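As a quick check on the numbers used in the discounting discussion above, here is a short sketch (our own illustration, not code from the chapter) that evaluates the $100-in-30-days versus $110-in-31-days choice from Day 0 and from Day 30, using the per-day discount factor of 0.95 from the text and the hyperbolic constant $$k = 2$$ from Figure 2.

~~~~
// Sketch: discounted values of the two prospects, seen from Day 0 and Day 30.
var expDiscount = function(t) { return Math.pow(0.95, t); };
var hypDiscount = function(t) { return 1 / (1 + 2 * t); };

// Discounted value of each prospect from the vantage point of *evaluationDay*
var values = function(discount, evaluationDay) {
  return {
    soonerSmaller: 100 * discount(30 - evaluationDay),
    laterLarger: 110 * discount(31 - evaluationDay)
  };
};

print('Exponential, Day 0:  ' + JSON.stringify(values(expDiscount, 0)));
print('Exponential, Day 30: ' + JSON.stringify(values(expDiscount, 30)));
print('Hyperbolic, Day 0:   ' + JSON.stringify(values(hypDiscount, 0)));
print('Hyperbolic, Day 30:  ' + JSON.stringify(values(hypDiscount, 30)));
~~~~

The exponential agent prefers the $110 from both vantage points (the ratio between the two values never changes), while the hyperbolic agent prefers the $110 from Day 0 but switches to the $100 once Day 30 arrives: the preference reversal discussed above.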
| {"id": "8adf0ba4ce94372feb4380f99a96c790", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/6c-inference-rl.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Reinforcement learning techniques\ndescription: Max-margin and linear programming methods for IRL.\nstatus: stub\nis_section: false\nhidden: true\n---\n\n- Could have appendix discussing Apprenticeship Learning ideas in Abbeel and Ng in more detail.\n", "date_published": "2016-03-09T21:34:01Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "6c-inference-rl.md"} |
| {"id": "5e5b2764fe4ae3054bfcbde84adab3f0", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/3c-pomdp.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Environments with hidden state: POMDPs\"\ndescription: Mathematical formalism for POMDPs, Bandit and Restaurant Choice examples. \n---\n\n\n \n## Introduction: Learning about the world from observation\n\nThe previous chapters made two strong assumptions that often fail in practice. First, we assumed the environment was an MDP, where the state is fully observed by the agent at all times. Second, we assumed that the agent starts off with *full knowledge* of the MDP -- rather than having to learn its parameters from experience. This chapter relaxes the first assumption by introducing POMDPs. The next [chapter](/chapters/3d-reinforcement-learning.html) introduces **reinforcement learning**, an approach to learning MDPs from experience. \n\n\n## POMDP Agent Model\n\n### Informal overview\n\nIn an MDP the agent observes the full state of the environment at each timestep. In Gridworld, for instance, the agent always knows their precise position and is uncertain only about their future position. Yet in real-world problems, the agent often does not observe the full state every timestep. For example, suppose you are sailing at night without any navigation instruments. You might be very uncertain about your precise position and you only learn about it indirectly, by waiting to observe certain landmarks in the distance. For environments where the state is only observed partially and indirectly, we use Partially Observed Markov Decision Processes (POMDPs). \n\nIn a Partially Observed Markov Decision Process (POMDP), the agent knows the transition function of the environment. This distinguishes POMDPs from [Reinforcement Learning](/chapters/3d-reinforcement-learning.html) problems. However, the agent starts each episode uncertain about the precise state of the environment. For example, if the agent is choosing where to eat on holiday, they may be uncertain about their own location and uncertain about which restaurants are open. \n\nThe agent learns about the state indirectly via *observations*. At each timestep, they receive an observation that depends on the true state and their previous action (according to a fixed *observation function*). They update a probability distribution on the current state and then choose an action. The action causes a state transition just like in an MDP but the agent only receives indirect evidence about the new state.\n\nAs an example, consider the <a href=\"/chapters/3a-mdp.html#restaurant_choice\">Restaurant Choice Problem</a>. Suppose Bob doesn't know whether the Noodle Shop is open. Previously, the agent's state consisted of Bob's location on the grid as well as the remaining time. In the POMDP case, the state also represents whether or not the Noodle Shop is open, which determines whether Bob can enter the Noodle Shop. When Bob gets close enough to the Noodle Shop, he will observe whether or not it's open. Bob's planning should take this into account: if the Noodle Shop is closed then Bob will observe this can simply head to a different restaurant. \n\n\n\n### Formal model\n\n<a id=\></a>\nWe first define the class of decision probems (POMDPs) and then define an agent model for optimally solving these problems. 
Our definitions are based on reft:kaelbling1998planning.\n\nA Partially Observable Markov Decision Process (POMDP) is a tuple $$ \\left\\langle S,A(s),T(s,a),U(s,a),\\Omega,O \\right\\rangle$$, where:\n\n- $$S$$ (state space), $$A$$ (action space), $$T$$ (transition function), $$U$$ (utility or reward function) form an MDP as defined in [chapter 3.1](/chapters/3a-mdp.html), with $$U$$ assumed to be deterministic[^utility]. \n\n- $$\\Omega$$ is the finite space of observations the agent can receive.\n\n- $$O$$ is a function $$ O\\colon S \\times A \\to \\Delta \\Omega $$. This is the *observation function*, which maps an action $$a$$ and the state $$s'$$ resulting from taking $$a$$ to an observation $$o \\in \\Omega$$ drawn from $$O(s',a)$$.\n\n[^utility]: In the RL literature, the utility or reward function is often allowed to be *stochastic*. Our agent models assume that the agent's utility function is deterministic. To represent environments with stochastic \"rewards\", we treat the reward as a stochastic part of the environment (i.e. the world state). So in a Bandit problem, instead of the agent receiving a (stochastic) reward $$R$$, they transition to a state to which they assign a fixed utility $$R$$. (Why do we avoid stochastic utilities? One focus of this tutorial is inferring an agent's preferences. The preferences are fixed over time and non-stochastic. We want to identify the agent's utility function with their preferences). \n\nSo at each timestep, the agent transitions from state $$s$$ to state $$s' \\sim T(s,a)$$ (where $$s$$ and $$s'$$ are generally unknown to the agent) having performed action $$a$$. On entering $$s'$$ the agent receives an observation $$o \\sim O(s',a)$$ and a utility $$U(s,a)$$. \n\nTo characterize the behavior of an expected-utility maximizing agent, we need to formalize the belief-updating process. Let $$b$$, the current belief function, be a probability distribution over the agent's current state. Then the agent's succesor belief function $$b'$$ over their next state is the result of a Bayesian update on the observation $$o \\sim O(s',a)$$ where $$a$$ is the agent's action in $$s$$. That is:\n\n<a id=\></a>**Belief-update formula:**\n\n$$\nb'(s') \\propto O(s',a,o)\\sum_{s \\in S}{T(s,a,s')b(s)}\n$$\n\nIntuitively, the probability that $$s'$$ is the new state depends on the marginal probability of transitioning to $$s'$$ (given $$b$$) and the probability of the observation $$o$$ occurring in $$s'$$. The relation between the variables in a POMDP is summarized in Figure 1 (below).\n\n<img src=\"/assets/img/pomdp_graph.png\" alt=\"diagram\" style=\"width: 400px;\"/>\n\n>**Figure 1:** The dependency structure between variables in a POMDP.\n\nThe ordering of events in Figure 1 is as follows:\n\n>(1). The agent chooses an action $$a$$ based on belief distribution $$b$$ over their current state (which is actually $$s$$).\n\n>(2). The agent gets utility $$u = U(s,a)$$ when leaving state $$s$$ having taken $$a$$.\n\n>(3). The agent transitions to state $$s' \\sim T(s,a)$$, where it gets observation $$o \\sim O(s',a)$$ and updates its belief to $$b'$$ by updating $$b$$ on the observation $$o$$.\n\nIn our previous agent model for MDPs, we defined the expected utility of an action $$a$$ in a state $$s$$ recursively in terms of the expected utility of the resulting pair of state $$s'$$ and action $$a'$$. This same recursive characterization of expected utility still holds. 
The important difference is that the agent's action $$a'$$ in $$s'$$ depends on their updated belief $$b'(s')$$. Hence the expected utility of $$a$$ in $$s$$ depends on the agent's belief $$b$$ over the state $$s$$. We call the following the **POMDP Expected Utility of State Recursion**. This recursion defines the function $$EU_{b}$$, which is analogous to the *value function*, $$V_{b}$$, in reft:kaelbling1998planning.\n\n<a id=\></a>**POMDP Expected Utility of State Recursion:**\n\n$$\nEU_{b}[s,a] = U(s,a) + \\mathbb{E}_{s',o,a'}(EU_{b'}[s',a'_{b'}])\n$$\n\nwhere:\n\n- we have $$s' \\sim T(s,a)$$ and $$o \\sim O(s',a)$$\n\n- $$b'$$ is the updated belief function $$b$$ on observation $$o$$, as defined <a href=\"#belief\">above</a>\n\n- $$a'_{b'}$$ is the softmax action the agent takes given belief $$b'$$\n\nThe agent cannot use this definition to directly compute the best action, since the agent doesn't know the state. Instead the agent takes an expectation over their belief distribution, picking the action $$a$$ that maximizes the following:\n\n$$\nEU[b,a] = \\mathbb{E}_{s \\sim b}(EU_{b}[s,a])\n$$\n\nWe can also represent the expected utility of action $$a$$ given belief $$b$$ in terms of a recursion on the successor belief state. We call this the **Expected Utility of Belief Recursion**, which is closely related to the Bellman Equations for POMDPs: <a id=\"pomdp_eu_belief\"></a>\n\n$$\nEU[b,a] = \\mathbb{E}_{s \\sim b}( U(s,a) + \\mathbb{E}_{s',o,a'}(EU[b',a']) )\n$$\n\nwhere $$s'$$, $$o$$, $$a'$$ and $$b'$$ are distributed as in the Expected Utility of State Recursion.\n\nUnfortunately, finding the optimal policy for POMDPs is intractable. Even in the special case where observations are deterministic and the horizon is finite, determining whether the optimal policy has expected utility greater than some constant is PSPACE-complete refp:papadimitriou1987complexity.\n\n### Implementation of the Model\n<a id=\></a>\n\nAs with the agent model for MDPs, we provide a direct translation of the equations above into an agent model for solving POMDPs. 
The variables `nextState`, `nextObservation`, `nextBelief`, and `nextAction` correspond to $$s'$$, $$o$$, $$b'$$ and $$a'$$ respectively, and we use the Expected Utility of Belief Recursion.\n\n<!-- pomdp_agent -->\n~~~~\n\nvar updateBelief = function(belief, observation, action){\n return Infer({ model() {\n var state = sample(belief);\n var predictedNextState = transition(state, action);\n var predictedObservation = observe(predictedNextState);\n condition(_.isEqual(predictedObservation, observation));\n return predictedNextState;\n }});\n};\n\nvar act = function(belief) {\n return Infer({ model() {\n var action = uniformDraw(actions);\n var eu = expectedUtility(belief, action);\n factor(alpha * eu);\n return action;\n }});\n};\n\nvar expectedUtility = function(belief, action) {\n return expectation(\n Infer({ model() {\n var state = sample(belief);\n var u = utility(state, action);\n if (state.terminateAfterAction) {\n return u;\n } else {\n var nextState = transition(state, action);\n var nextObservation = observe(nextState);\n var nextBelief = updateBelief(belief, nextObservation, action);\n var nextAction = sample(act(nextBelief));\n return u + expectedUtility(nextBelief, nextAction);\n }\n }}));\n};\n\n// To simulate the agent, we need to transition\n// the state, sample an observation, then\n// compute agent's action (after agent has updated belief).\n\n// *startState* is agent's actual startState (unknown to agent)\n// *priorBelief* is agent's initial belief function\n\nvar simulate = function(startState, priorBelief) {\n\n var sampleSequence = function(state, priorBelief, action) {\n var observation = observe(state);\n var belief = updateBelief(priorBelief, observation, action);\n var action = sample(act(belief));\n var output = [ [state, action] ];\n\n if (state.terminateAfterAction){\n return output;\n } else {\n var nextState = transition(state, action);\n return output.concat(sampleSequence(nextState, belief, action));\n }\n };\n return sampleSequence(startState, priorBelief, 'noAction');\n};\n~~~~\n\n## Applying the POMDP agent model\n\n<a id='bandits'></a>\n\n### Multi-arm Bandits\n\n[Multi-armed Bandits](https://en.wikipedia.org/wiki/Multi-armed_bandit) are an especially simple class of sequential decision problem. A Bandit problem has a single state and multiple actions (\), where each arm has a distribution on rewards/utilities that is initially unknown. The agent has a finite time horizon and must balance exploration (i.e. learn about the reward distribution) with exploitation (obtain reward). \n\nBandits can be modeled as Reinforcement Learning problems, where the agent learns a good policy for an initially unknown MDP. This is the practical way to solve Bandits and the next [chapter](/chapters/3d-reinforcement-learning.html) illustrates this approach. Here we model Bandits as POMDPs and use the code above to find the optimal policy for some toy Bandit problems[^optimal]. (We choose a Bandit example to demonstrate the difficulty of exactly solving even the simplest POMDPs.)\n\n[^optimal]: In the standard Bandit problem, there is a single unknown MDP characterized by the reward distribution of each arm. In a more challenging generalization, the agent faces a sequence of random Bandit problems that are drawn from some prior. If we treat a standard Bandit as a POMDP, we compute the Bayes optimal policy for the single Bandit and by doing so, we implicitly compute the optimal policy for a sequence of Bandits drawn from the same prior. 
This is analogous to finding the optimal policy for an MDP: the optimal policy covers every possible state, including those occurring with tiny probability. Model-free RL, by contrast, will focus on the states that are actually encountered in practice.\n\nIn our examples, the arms are labeled with integers and arm $$i$$ has Bernoulli distributed rewards with parameter $$\\theta_i$$. In the first codebox (below), the true reward distribution, $$(\\theta_0,\\theta_1)$$, is $$(0.7,0.8)$$ but the agent's prior is uniform over $$(0.7,0.8)$$ and $$(0.7,0.2)$$. So the agent's only uncertainty is over $$\\theta_1$$. \n\nRather than implement everything in the codebox, we use the library [webppl-agents](https://github.com/agentmodels/webppl-agents). This includes functions for constructing a Bandit environment (`makeBanditPOMDP`), for constructing a POMDP agent (`makePOMDPAgent`) and for running the agent on the environment (`simulatePOMDP`). This [chapter](/chapters/guide-library.html) explains how to use webppl-agents. The <a href=\>Appendix</a> includes a codebox with a full implementation of a POMDP agent on a Bandit problem. \n\n\n~~~~\n///fold: displayTrajectory\n\n// Takes a trajectory containing states and actions and returns one containing\n// locs and actions, getting rid of 'start' and the final meaningless action.\nvar displayTrajectory = function(trajectory) {\n var getPrizeAction = function(stateAction) {\n var state = stateAction[0];\n var action = stateAction[1];\n return [state.manifestState.loc, action];\n };\n\n var prizesActions = map(getPrizeAction, trajectory);\n var flatPrizesActions = _.flatten(prizesActions);\n var actionsPrizes = flatPrizesActions.slice(1, flatPrizesActions.length - 1);\n\n var printOut = function(n) {\n print('\\n Arm: ' + actionsPrizes[2*n] + ' -- Prize: '\n + actionsPrizes[2*n + 1]);\n };\n return map(printOut, _.range((actionsPrizes.length)*0.5));\n};\n///\n\n\n// 1. Construct Bandit POMDP\n\n// Reward distributions are Bernoulli\nvar getRewardDist = function(theta){\n return Categorical({ vs:[0,1], ps: [1-theta, theta]});\n}\n\n// True reward distributions are [.7,.8].\nvar armToRewardDist = {\n 0: getRewardDist(.7),\n 1: getRewardDist(.8)\n};\n\n// But the agent's prior is uniform over [.7,.8] and [.7,.2].\nvar alternateArmToRewardDist = {\n 0: getRewardDist(.7),\n 1: getRewardDist(.2)\n}\n \n// Options for library function for Bandits. Number of trials = horizon.\nvar banditOptions = {\n numberOfArms: 2,\n armToPrizeDist: armToRewardDist, \n numberOfTrials: 11,\n numericalPrizes: true\n};\n\nvar bandit = makeBanditPOMDP(banditOptions);\nvar startState = bandit.startState;\nvar world = bandit.world;\n\n\n// 2. Construct POMDP agent\n\n// Prior as described above and *latentState* is an implementation detail\n// for the libraries implementation of POMDPs\nvar priorBelief = Infer({ model() {\n var armToRewardDist = uniformDraw([armToRewardDist,\n alternateArmToRewardDist]);\n return extend(startState, { latentState: armToRewardDist });\n}});\n\n\nvar utility = function(state, action) {\n var reward = state.manifestState.loc;\n return reward === 'start' ? 0 : reward;\n};\n\nvar params = { \n priorBelief, \n utility,\n alpha: 1000 \n};\n\nvar agent = makePOMDPAgent(params, bandit.world);\n\n\n// 3. 
Simulate agent and return state-action pairs\n\nvar trajectory = simulatePOMDP(startState, world, agent, 'stateAction');\ndisplayTrajectory(trajectory);\n\n~~~~\n\n\nSolving Bandit problems using the simple dynamic approach of our POMDP agent quickly blows up as the horizon (\"number of trials\") and the number of arms increase. The codebox below shows how runtime scales as a function of the number of trials. (This takes approximately 20 seconds to run.)\n\n<!-- bandit_scaling_number_of_trials -->\n\n~~~~\n///fold: Construct world and agent priorBelief as above\n\nvar getRewardDist = function(theta){\n return Categorical({ vs:[0,1], ps: [1-theta, theta]});\n}\n\n// True reward distributions are [.7,.8].\nvar armToRewardDist = {\n 0: getRewardDist(.7),\n 1: getRewardDist(.8)\n};\n\n// But the agent's prior is uniform over [.7,.8] and [.7,.2].\nvar alternateArmToRewardDist = {\n 0: getRewardDist(.7),\n 1: getRewardDist(.2)\n}\n\nvar makeBanditWithNumberOfTrials = function(numberOfTrials) {\n return makeBanditPOMDP({\n numberOfTrials,\n\tnumberOfArms: 2,\n\tarmToPrizeDist: armToRewardDist,\n\tnumericalPrizes: true\n });\n};\n\nvar getPriorBelief = function(numberOfTrials){\n return Infer({ model() {\n var armToPrizeDist = uniformDraw([armToRewardDist,\n alternateArmToRewardDist]);\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n }})\n};\n\nvar baseParams = { alpha: 1000 };\n///\n\n// Simulate agent for a given number of Bandit trials\nvar getRuntime = function(numberOfTrials) {\n var bandit = makeBanditWithNumberOfTrials(numberOfTrials);\n var world = bandit.world;\n var startState = bandit.startState;\n var priorBelief = getPriorBelief(numberOfTrials);\n var params = extend(baseParams, { priorBelief });\n var agent = makeBanditAgent(params, bandit, 'belief');\n\n var f = function() {\n return simulatePOMDP(startState, world, agent, 'stateAction');\n };\n \n return timeit(f).runtimeInMilliseconds.toPrecision(3) * 0.001;\n};\n\n// Runtime as a function of number of trials\nvar numberOfTrialsList = _.range(15).slice(2);\nvar runtimes = map(getRuntime, numberOfTrialsList);\nviz.line(numberOfTrialsList, runtimes);\n\n~~~~\n\n\nScaling is much worse in the number of arms, since each additional uncertain arm multiplies the number of latent reward assignments the agent must consider. 
The following may take over a minute to run:\n\n\n<!-- bandit_scaling_number_of_arms -->\n~~~~\n///fold:\n\nvar getRewardDist = function(theta){\n return Categorical({ vs:[0,1], ps: [1-theta, theta]});\n}\n\nvar makeArmToRewardDist = function(numberOfArms) {\n return map(function(x) { return getRewardDist(0.8); }, _.range(numberOfArms));\n};\n\nvar armToRewardDistSampler = function(numberOfArms) {\n return map(function(x) { return uniformDraw([getRewardDist(0.2),\n getRewardDist(0.8)]); },\n _.range(numberOfArms));\n};\n\nvar getPriorBelief = function(numberOfTrials, numberOfArms) {\n return Infer({ model() {\n var armToRewardDist = armToRewardDistSampler(numberOfArms);\n return makeBanditStartState(numberOfTrials, armToRewardDist);\n }});\n};\n\nvar baseParams = {alpha: 1000};\n///\n\nvar getRuntime = function(numberOfArms) {\n var armToRewardDist = makeArmToRewardDist(numberOfArms);\n var options = {\n numberOfTrials: 5,\n\tarmToPrizeDist: armToRewardDist,\n\tnumberOfArms,\n\tnumericalPrizes: true\n };\n var numberOfTrials = options.numberOfTrials;\n var bandit = makeBanditPOMDP(options);\n var world = bandit.world;\n var startState = bandit.startState;\n var priorBelief = getPriorBelief(numberOfTrials, numberOfArms);\n var params = extend(baseParams, { priorBelief });\n var agent = makeBanditAgent(params, bandit, 'belief');\n\n var f = function() {\n return simulatePOMDP(startState, world, agent, 'stateAction');\n };\n\n return timeit(f).runtimeInMilliseconds.toPrecision(3) * 0.001;\n};\n\n// Runtime as a function of number of arms\nvar numberOfArmsList = [1, 2, 3];\nvar runtimes = map(getRuntime, numberOfArmsList);\nviz.line(numberOfArmsList, runtimes);\n~~~~\n\n\n### Gridworld with observations\n\nA person looking for a place to eat will not be *fully* informed about all local restaurants. This section extends the [Restaurant Choice problem](/chapters/3a-mdp.html) to represent an agent with uncertainty about which restaurants are open. The agent *observes* whether a restaurant is open by moving to one of the grid locations adjacent to the restaurant. If the restaurant is open, the agent can enter and receive utility. \n\n<!-- Removed to slim down the text\nIn this POMDP version of Restaurant Choice, a rational agent can exhibit behavior that never occurs in the MDP version:\n\n1. The agent thinks Donut South is closed and Donut North is open, and so goes to the further away Donut North (see next codebox). \n\n2. The agent goes to Noodle, see that it's closed and so takes the loop round to Veg. This route that doesn't make sense if Noodle is known to be closed (see second codebox). \n-->\n\nThe POMDP version of Restaurant Choice is built from the MDP version. States now have the form:\n\n>`{manifestState: { ... }, latentState: { ... }}`\n\nThe `manifestState` contains the features of the world that the agent always observes directly (and so always knows). This includes the remaining time and the agent's location in the grid. The `latentState` contains features that are only observable in certain states. In our examples, `latentState` specifies whether each restaurant is open or closed. The transition function for the POMDP is the same as the MDP except that if a restaurant is closed the agent cannot transition to it.\n\n\n\nThe next two codeboxes use the same POMDP, where all restaurants are open but for Noodle. The first agent prefers the Donut Store and believes (falsely) that Donut South is likely closed. 
The second agent prefers Noodle and believes (falsely) that Noodle is likely open.\n\n<!-- agent_thinks_donut_south_closed -->\n~~~~\n///fold:\nvar getPriorBelief = function(startManifestState, latentStateSampler){\n return Infer({ model() {\n return {\n manifestState: startManifestState, \n latentState: latentStateSampler()};\n }});\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___], \n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar pomdp = makeGridWorldPOMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\nvar utilityTable = {\n 'Donut N': 5,\n 'Donut S': 5,\n 'Veg': 1,\n 'Noodle': 1,\n 'timeCost': -0.1\n};\nvar utility = function(state, action) {\n var feature = pomdp.feature;\n var name = feature(state.manifestState).name;\n if (name) {\n return utilityTable[name];\n } else {\n return utilityTable.timeCost;\n }\n};\n\nvar latent = {\n 'Donut N': true,\n 'Donut S': true,\n 'Veg': true,\n 'Noodle': false\n};\nvar alternativeLatent = extend(latent, {\n 'Donut S': false,\n 'Noodle': true\n});\n\nvar startState = {\n manifestState: { \n loc: [3, 1],\n terminateAfterAction: false,\n timeLeft: 11\n },\n latentState: latent\n};\n\nvar latentStateSampler = function() {\n return categorical([0.8, 0.2], [alternativeLatent, latent]);\n};\n\nvar priorBelief = getPriorBelief(startState.manifestState, latentStateSampler);\nvar agent = makePOMDPAgent({ utility, priorBelief, alpha: 100 }, pomdp);\nvar trajectory = simulatePOMDP(startState, pomdp, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\n\nviz.gridworld(pomdp.MDPWorld, { trajectory: manifestStates });\n~~~~\n\nHere is the agent that prefers Noodle and falsely belives that it is open:\n\n<!-- agent_thinks_noodle_open -->\n~~~~\n///fold: Same world, prior, start state, and latent state as previous codebox\nvar getPriorBelief = function(startManifestState, latentStateSampler){\n return Infer({ model() {\n return {\n manifestState: startManifestState, \n latentState: latentStateSampler()\n };\n }});\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___], \n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar pomdp = makeGridWorldPOMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar latent = {\n 'Donut N': true,\n 'Donut S': true,\n 'Veg': true,\n 'Noodle': false\n};\nvar alternativeLatent = extend(latent, {\n 'Donut S': false,\n 'Noodle': true\n});\n\nvar startState = {\n manifestState: { \n loc: [3, 1],\n terminateAfterAction: false,\n timeLeft: 11\n },\n latentState: latent\n};\n\nvar latentSampler = function() {\n return categorical([0.8, 0.2], [alternativeLatent, latent]);\n};\n\nvar priorBelief = getPriorBelief(startState.manifestState, latentSampler);\n///\n\nvar utilityTable = {\n 'Donut N': 1,\n 'Donut S': 1,\n 'Veg': 3,\n 'Noodle': 5,\n 
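// (Noodle now has the highest utility)\n 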
'timeCost': -0.1\n};\nvar utility = function(state, action) {\n var feature = pomdp.feature;\n var name = feature(state.manifestState).name;\n if (name) {\n return utilityTable[name];\n } else {\n return utilityTable.timeCost;\n }\n};\nvar agent = makePOMDPAgent({ utility, priorBelief, alpha: 100 }, pomdp);\nvar trajectory = simulatePOMDP(startState, pomdp, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\n\nviz.gridworld(pomdp.MDPWorld, { trajectory: manifestStates });\n~~~~\n\n\nWhen does it make sense to treat this Restaurant Choice problem as a POMDP? As with Bandits, if the problem we face is a fixed (but initially unknown) MDP, and we get many episodes in which to learn by trial and error, then Reinforcement Learning is a simple and scalable approach. If the MDP varies with every episode (e.g. the hidden state of whether a restaurant is open varies from day to day), then POMDP methods may work better. (Even in the case where the MDP is fixed, if the stakes are very high, it will be best to solve for the optimal POMDP policy.) Finally, if our goal is to model human planning, then POMDP models are worth considering as they are more sample-efficient than RL techniques (and humans can often solve planning problems in very few tries). \n\nThe next [chapter](/chapters/3d-reinforcement-learning.html) is on reinforcement learning, an approach which *learns* to solve an initially unknown MDP.\n\n\n\n<!-- TODO\n### Possible additions\n- Doing belief update online vs belief doing a batch update every time. Latter is good if belief updates are rare and if we are doing approximate inference (otherwise the errors in approximations will compound in some way). Maintaining observations is also good if your ability to do good approximate inference changes over time. (Or least maintaining compressed observations or some kind of compressed summary statistic of the observation -- e.g. .jpg or mp3 form). This is related to UDT vs CDT and possibly to the episodic vs. declarative memory in human psychology. [Add a different *updateBelief* function to illustrate.]\n-->\n\n\n<br>\n\n<a id=\"appendix\"></a>\n### Appendix: Complete Implementation of POMDP agent for Bandits\n\nWe apply the POMDP agent to a simplified variant of the Multi-arm Bandit Problem. In this variant, pulling an arm produces a *prize* deterministically. The agent begins with uncertainty about the mapping from arms to prizes and learns by trying the arms. In our example, there are only two arms. The first arm is known to have the prize \"chocolate\" and the second arm either has \"champagne\" or has no prize at all (\"nothing\"). See Figure 2 (below) for details.\n\n<img src=\"/assets/img/3c-irl-bandit.png\" alt=\"diagram\" style=\"width: 500px;\"/>\n\n>**Figure 2:** Diagram for deterministic Bandit problem used in the codebox below. The boxes represent possible deterministic mappings from arms to prizes. Each prize has a reward/utility $$u$$. On the right are the agent's initial beliefs about the probability of each mapping. The true mapping (i.e. true *latent state*) has a solid outline.\n\nIn our implementation of this problem, the two arms are labeled `Arm0` and `Arm1` respectively. The *action* of pulling `Arm0` is also labeled `0` (and likewise for `Arm1`). After taking action `0`, the agent transitions to a state corresponding to the prize for `Arm0` and then gets to observe this prize. 
States are Javascript objects that contain a property for counting down the time (as in the MDP case) as well as a `prize` property. States also contain the *latent* mapping from arms to prizes (called `armToPrize`) that determines how an agent transitions on pulling an arm.\n\n~~~~\n// Pull arm0 or arm1\nvar actions = [0, 1];\n\n// Use latent \ mapping in state to\n// determine which prize agent gets\nvar transition = function(state, action){\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n prize: state.armToPrize[action], \n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft == 1\n });\n};\n\n// After pulling an arm, agent observes associated prize\nvar observe = function(state){\n return state.prize;\n};\n\n// Starting state specifies the latent state that agent tries to learn\n// (In order that *prize* is defined, we set it to 'start', which\n// has zero utilty for the agent). \nvar startState = { \n prize: 'start',\n timeLeft: 3, \n terminateAfterAction:false,\n armToPrize: { 0: 'chocolate', 1: 'champagne' }\n};\n~~~~\n\nHaving illustrated our implementation of the POMDP agent and the Bandit problem, we put the pieces together and simulate the agent's behavior. The `makeAgent` function is a simplified version of the library function `makeBeliefAgent` used throughout the rest of this tutorial[^makeBelief].\n\nThe <a href=\"#belief\">Belief-Update Formula</a> is implemented by `updateBelief`. Instead of hand-coding a Bayesian belief update, we simply use WebPPL's built in inference primitives. This approach means our POMDP agent can do any kind of inference that WebPPL itself can do. For this tutorial, we use the inference function `Enumerate`, which captures exact inference over discrete belief spaces. By changing the inference function, we get a POMDP agent that does approximate inference and simulates their future selves as doing approximate inference. This inference could be over discrete or continuous belief spaces. (WebPPL includes Particle Filters, MCMC, and Hamiltonian Monte Carlo for differentiable models). \n\n[^makeBelief]: One difference between the functions is that `makeAgent` uses the global variables `transition` and `observation`, instead of having a `world` parameter.\n\n~~~~\n///fold: Bandit problem is defined as above\n\n// Pull arm0 or arm1\nvar actions = [0, 1];\n\n// Use latent \ mapping in state to\n// determine which prize agent gets\nvar transition = function(state, action){\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n prize: state.armToPrize[action], \n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft == 1\n });\n};\n\n// After pulling an arm, agent observes associated prize\nvar observe = function(state){\n return state.prize;\n};\n\n// Starting state specifies the latent state that agent tries to learn\n// (In order that *prize* is defined, we set it to 'start', which\n// has zero utilty for the agent). 
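\n// The *armToPrize* mapping below is the latent state: the agent never\n// observes it directly and instead maintains a belief over it.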
\nvar startState = { \n prize: 'start',\n timeLeft: 3, \n terminateAfterAction:false,\n armToPrize: {0:'chocolate', 1:'champagne'}\n};\n///\n\n// Defining the POMDP agent\n\n// Agent params include utility function and initial belief (*priorBelief*)\n\nvar makeAgent = function(params) {\n var utility = params.utility;\n\n // Implements *Belief-update formula* in text\n var updateBelief = function(belief, observation, action){\n return Infer({ model() {\n var state = sample(belief);\n var predictedNextState = transition(state, action);\n var predictedObservation = observe(predictedNextState);\n condition(_.isEqual(predictedObservation, observation));\n return predictedNextState;\n }});\n };\n\n var act = dp.cache(\n function(belief) {\n return Infer({ model() {\n var action = uniformDraw(actions);\n var eu = expectedUtility(belief, action);\n factor(1000 * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(\n function(belief, action) {\n return expectation(\n Infer({ model() {\n var state = sample(belief);\n var u = utility(state, action);\n if (state.terminateAfterAction) {\n return u;\n } else {\n var nextState = transition(state, action);\n var nextObservation = observe(nextState);\n var nextBelief = updateBelief(belief, nextObservation, action);\n var nextAction = sample(act(nextBelief));\n return u + expectedUtility(nextBelief, nextAction);\n }\n }}));\n });\n\n return { params, act, expectedUtility, updateBelief };\n};\n\nvar simulate = function(startState, agent) {\n var act = agent.act;\n var updateBelief = agent.updateBelief;\n var priorBelief = agent.params.priorBelief;\n\n var sampleSequence = function(state, priorBelief, action) {\n var observation = observe(state);\n var belief = ((action === 'noAction') ? priorBelief : \n updateBelief(priorBelief, observation, action));\n var action = sample(act(belief));\n var output = [[state, action]];\n\n if (state.terminateAfterAction){\n return output;\n } else {\n var nextState = transition(state, action);\n return output.concat(sampleSequence(nextState, belief, action));\n }\n };\n // Start with agent's prior and a special \"null\" action\n return sampleSequence(startState, priorBelief, 'noAction');\n};\n\n\n\n//-----------\n// Construct the agent\n\nvar prizeToUtility = {\n chocolate: 1, \n nothing: 0, \n champagne: 1.5, \n start: 0\n};\n\nvar utility = function(state, action) {\n return prizeToUtility[state.prize];\n};\n\n\n// Define true startState (including true *armToPrize*) and\n// alternate possibility for startState (see Figure 2)\n\nvar numberTrials = 1;\nvar startState = { \n prize: 'start',\n timeLeft: numberTrials + 1, \n terminateAfterAction: false,\n armToPrize: { 0: 'chocolate', 1: 'champagne' }\n};\n\nvar alternateStartState = extend(startState, {\n armToPrize: { 0: 'chocolate', 1: 'nothing' }\n});\n\n// Agent's prior\nvar priorBelief = Categorical({ \n ps: [.5, .5], \n vs: [startState, alternateStartState]\n});\n\n\nvar params = { utility: utility, priorBelief: priorBelief };\nvar agent = makeAgent(params);\nvar trajectory = simulate(startState, agent);\n\nprint('Number of trials: ' + numberTrials);\nprint('Arms pulled: ' + map(second, trajectory));\n~~~~\n\nYou can change the agent's behavior by varying `numberTrials`, `armToPrize` in `startState` or the agent's prior. 
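For example, here is a minimal sketch of one such variation (hypothetical, and assuming `makeAgent`, `simulate`, `utility` and the bandit definitions from the codebox above are in scope): the agent gets two trials and a prior that puts only 10% probability on `Arm1` yielding champagne, and we check which arms it pulls.\n\n~~~~\n// Sketch: two trials, and a prior that makes champagne unlikely.\n// Assumes makeAgent, simulate, utility and the bandit definitions\n// from the previous codebox are in scope.\nvar numberTrials = 2;\nvar startState = { \n prize: 'start',\n timeLeft: numberTrials + 1, \n terminateAfterAction: false,\n armToPrize: { 0: 'chocolate', 1: 'champagne' }\n};\n\nvar alternateStartState = extend(startState, {\n armToPrize: { 0: 'chocolate', 1: 'nothing' }\n});\n\n// Prior now favors 'nothing' on Arm1\nvar priorBelief = Categorical({ \n ps: [.1, .9], \n vs: [startState, alternateStartState]\n});\n\nvar agent = makeAgent({ utility: utility, priorBelief: priorBelief });\nvar trajectory = simulate(startState, agent);\nprint('Arms pulled: ' + map(second, trajectory));\n~~~~\n\n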
Note that the agent's final arm pull is random because the agent only gets utility when *leaving* a state.\n\n\n<br>\n\n### Footnotes\n\n", "date_published": "2017-03-19T19:27:27Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "3c-pomdp.md"} |
| {"id": "1c92e4186308d1ad03fed592fa57db19", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/1-introduction.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Introduction\ndescription: \"Motivating the problem of modeling human planning and inference using rich computational models.\"\nis_section: true\n---\n\nImagine a dataset that records how individuals move through a city. The figure below shows what a datapoint from this set might look like. It depicts an individual, who we'll call Bob, moving along a street and then stopping at a restaurant. This restaurant is one of two nearby branches of a chain of Donut Stores. Two other nearby restaurants are also shown on the map.\n\n\n\nGiven Bob's movements alone, what can we infer about his preferences and beliefs? Since Bob spent a long time at the Donut Store, we infer that he bought something there. Since Bob could easily have walked to one of the other nearby eateries, we infer that Bob prefers donuts to noodles or salad.\n\nAssuming Bob likes donuts, why didn't he choose the store closer to his starting point (\)? The cause might be Bob's *beliefs* and *knowledge* rather than his *preferences*. Perhaps Bob doesn't know about \ because it just opened. Or perhaps Bob knows about Donut South but chose Donut North because it is open later.\n\nA different explanation is that Bob *intended* to go to the healthier \. However, the most efficient route to the Salad Bar takes him directly past Donut North, and once outside, he found donuts more tempting than salad.\n\nWe have described a variety of inferences about Bob which would explain his behavior. This tutorial develops models for inference that represent these different explanations and allow us to compute which explanations are most plausible. These models can also simulate an agent's behavior in novel scenarios: for example, predicting Bob's behavior if he looked for food in a different part of the city. \n\n<!-- Remove because we don't do hierarchical case\nNow, suppose that our dataset shows that a significant number of different individuals took exactly the same path as Bob. How would this change our conclusions about him? It could be that everyone is tempted away from healthy food in the way Bob potentially was. But this seems unlikely. Instead, it is now more plausible that Donut South is closed or that it is a new branch that few people know about. \n\nThis kind of reasoning, where we make assumptions about the distributions of beliefs within populations, will be formalized and simulated in later chapters. We will also consider multi-agent behavior where coordination or competition become important. \n-->\n\n## Agents as programs\n\n### Making rational plans\n\nFormal models of rational agents play an important role in economics refp:rubinstein2012lecture and in the cognitive sciences refp:chater2003rational as models of human or animal behavior. Core components of such models are *expected-utility maximization*, *Bayesian inference*, and *game-theoretic equilibria*. These ideas are also applied in engineering and in artificial intelligence refp:russell1995modern in order to compute optimal solutions to problems and to construct artificial systems that learn and reason optimally. \n\nThis tutorial implements utility-maximizing Bayesian agents as functional probabilistic programs. 
These programs provide a concise, intuitive translation of the mathematical specification of rational agents as code. The implemented agents explicitly simulate their own future choices via recursion. They update beliefs by exact or approximate Bayesian inference. They reason about other agents by simulating them (which includes simulating the simulations of others). \n\nThe first section of the tutorial implements agent models for sequential decision problems in stochastic environments. We introduce a program that solves finite-horizon MDPs, then extend it to POMDPs. These agents behave *optimally*, making rational plans given their knowledge of the world. Human behavior, by contrast, is often *sub-optimal*, whether due to irrational behavior or constrained resources. The programs we use to implement optimal agents can, with slight modification, implement agents with biases (e.g. time inconsistency) and with resource bounds (e.g. bounded look-ahead and Monte Carlo sampling).\n\n\n### Learning preferences from behavior\n\nThe example of Bob was not primarily about *simulating* a rational agent, but rather about the problem of *learning* (or *inferring*) an agent's preferences and beliefs from their choices. This problem is important to both economics and psychology. Predicting preferences from past choices is also a major area of applied machine learning; for example, consider the recommendation systems used by Netflix and Facebook.\n\nOne approach to this problem is to assume the agent is a rational utility-maximizer, to assume the environment is an MDP or POMDP, and to infer the utilities and beliefs and predict the observed behavior. This approach is called \ in economics refp:aguirregabiria2010dynamic, \ in cognitive science refp:ullman2009help, and \ (IRL) in machine learning and AI refp:ng2000algorithms. It has been applied to inferring the perceived rewards of education from observed work and education choices, preferences for health outcomes from smoking behavior, and the preferences of a nomadic group over areas of land (see cites in reft:evans2015learning). \n\n[Section IV](/chapters/4-reasoning-about-agents.html) shows how to infer the preferences and beliefs of the agents modeled in earlier chapters. Since the agents are implemented as programs, we can apply probabilistic programming techniques to perform this sort of inference with little additional code. We will make use of both exact Bayesian inference and sampling-based approximations (MCMC and particle filters).\n\n\n## Taster: probabilistic programming\n\nOur models of agents, and the corresponding inferences about agents, all run in \ in the browser, accompanied by animated visualizations of agent behavior. The language of the tutorial is [WebPPL](http://webppl.org), an easy-to-learn probabilistic programming language based on Javascript refp:dippl. As a taster, here are two simple code snippets in WebPPL:\n\n~~~~\n// Using the stochastic function `flip` we build a function that\n// returns 'H' and 'T' with equal probability:\n\nvar coin = function() {\n return flip(.5) ? 'H' : 'T';\n};\n\nvar flips = [coin(), coin(), coin()];\n\nprint('Some coin flips:');\nprint(flips);\n~~~~\n\n~~~~\n// We now use `flip` to define a sampler for the geometric distribution:\n\nvar geometric = function(p) {\n return flip(p) ? 
1 + geometric(p) : 1\n};\n\nvar boundedGeometric = Infer({ \n model() { return geometric(0.5); },\n method: 'enumerate', \n maxExecutions: 20 \n});\n\nprint('Histogram of (bounded) Geometric distribution');\nviz(boundedGeometric);\n~~~~\n\nIn the [next chapter](/chapters/2-webppl.html), we will introduce WebPPL in more detail.\ndate_published2017-04-16T22:22:12ZauthorsOwain EvansAndreas StuhlmüllerJohn SalvatierDaniel Filansummariesfilename1-introduction.md |
| id7334e96c06582304d08d94c2f873e6c7titleModeling Agents with Probabilistic Programsurlhttps://agentmodels.org/chapters/8-guide-library.htmlsourceagentmodelssource_typemarkdowntext---\nlayout: chapter\ntitle: Quick-start guide to the webppl-agents library\ndescription: Create your own MDPs and POMDPs. Create gridworlds and k-armed bandits. Use agents from the library and create your own.\nis_section: true\n---\n\n<!--\n## Plan for guide\n\nGoal of the guide is to make it easy for people to use the `webppl-agents` library. It should be self-contained, so that people don't need to go through all of agentmodels.org in order to find the guide useful.\n\nContents:\n\n1. Write an MDP (use line example from Section 3.1) and run MDP and hyperbolic agents. MDP has `transition` and `stateToAction`.\n\n3. Gridworld MDP version. Show hiking example. Show how to vary the utilities. Run different agents on it. Show how to create variant gridworlds (need nicer interface for \"feature\").\n\n4. Show how to create your own agent and run it on gridworld. Random agent. Epsilon-greedy agent instead of softmax.\n\n2. Write a POMDP. Could be line-world also: if state 1 says so, you go right, otherwise you go left. POMDP has `transition`, `beliefToAction`, `observation` functions. The startState will contain the latentState that agent is uncertain about. Work with `beliefDelay` agent to show comparison between optimal and boundVOI. Maybe discuss beliefAgent in footnotes.\n\n5. Bandits. Show how to create bandit problems. Run POMDP agents. Create your own POMDP agent.\n\n-->\n\n### Contents\n\n1. <a href=\"#intro\">Introduction</a>\n\n2. <a href=\"#createMDP\">Creating MDPs</a>\n\n3. <a href=\"#gridworld\">Creating Gridworld MDPs</a>\n\n4. <a href=\"#agents\">Creating your own agents</a>\n\n5. <a href=\"#createPOMDP\">Creating POMDPs</a>\n\n6. Creating k-armed bandits\n\n\n<a id=\"intro\"></a>\n\n### Introduction\n\nThis is a quick-start guide to using the `webppl-agents` library. For a comprehensive explanation of the ideas behind the library (e.g. MDPs, POMDPs, hyperbolic discounting) and diverse examples of its use, go to the online textbook [agentmodels.org](http://agentmodels.org).\n\nThe webppl-agents library is built around two basic entities: *agents* and *environments*. These entities are combined by *simulating* an agent interacting with a particular environment. The library includes two standard RL environments as examples (Gridworld and Multi-armed Bandits). Four kinds of agent are included. Many combinations of environment and agent are possible. In addition, it's easy to add your own environments and agents -- as we illustrate below.\n\nNot all environments and agents can be combined. Among environments, we distinguish MDPs (Markov Decision Processes) and POMDPs (Partially Observable Markov Decision Processes). For a POMDP environment, the agent must be a \, which means they maintain a belief distribution on the state[^separation].\n\n[^separation]: This separation of POMDPs and MDPs is not necessary from a theoretical perspective, since POMDPs generalize MDPs. However, the separation is convenient in practice; it allows the MDP code to be short and perspicuous and it provides performance advantages.\n\n<a id=\></a>\n\n### Creating your own MDP environment\n\nWe begin by creating a very simple MDP environment and running two agents from the library on that environment.\n\nMDPs are defined [here](http://agentmodels.org/chapters/3a-mdp.html). 
For use in the library, MDP environments are Javascript objects with the following methods:\n\n>`{transition: ..., stateToActions: ...}`\n\nThe `transition` method is a function from state-action pairs to states (as in the function $$T$$ in the MDP definition). The `stateToAction` method is a mapping from states to the actions that are allowed in that state. (This is often a constant function).\n\nTo run an agent on an MDP, the agent object must have a `utility` method defined on the MDP's state-action space. This method is the agent's \ or \ function (we use the terms interchangeably).\n\n#### Creating the Line MDP environment\nOur first MDP environment is a discrete line (or one-dimensional gridworld) where the agent can move left or right (starting from the origin). More precisely, the Line MDP is as follows:\n\n- **States:** Points on the integer line (e.g ..., -1, 0, 1, 2, ...).\n\n- **Actions/transitions:** Actions “left”, “right” and “stay” move the agent deterministically along the line in either direction. We represent the actions as $$[-1,0,1]$$ in the code below.\n\nIn our examples, the agent's `startState` is the origin. The utility is 1 at the origin, 3 at the third state right of the origin (\"state 3\"), and 0 otherwise.\n\nThe transition function must also decrement the time. States are objects with a `terminateAfterAction` property. In the example below, `terminateAfterAction` is set to `true` when the state's `timeLeft` attribute gets down to 1; this causes the MDP to terminate. Here is an example state for the Line MDP (it's also the `startState`):\n\n>`{terminateAfterAction: false, timeLeft:5, loc:0}`\n\n~~~~\n// helper function that decrements time and triggers termination when\n// time elapsed\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n\n// constructuor for the \"line\" MDP environment:\n// argument *totalTime* is the time horizon\nvar makeLineMDP = function(totalTime) {\n\n var stateToActions = function(state) {\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.loc + action;\n var stateNewLoc = extend(state, {loc: newLoc});\n return advanceStateTime(stateNewLoc);\n };\n\n var world = { stateToActions, transition };\n\n var startState = {\n timeLeft: totalTime,\n terminateAfterAction: false,\n loc: 0\n };\n\n var utility = function(state, action){\n var table = { 0: 1, 3: 3 };\n return table[state.loc] ? table[state.loc] : 0;\n };\n\n return { world, startState, utility };\n};\n\n// save the MDP constructor for use in other codeboxes\nwpEditor.put('makeLineMDP', makeLineMDP);\n~~~~\n\nTo run an agent on this MDP, we use a `makeAgent` constructor and the library function `simulateMDP`. The constructor for MDP agents is `makeMDPAgent`:\n\n>`makeMDPAgent(params, world)`\n\nAgent constructors always have these same two arguments. The `world` argument is required for the agent's internal simulations of possible transitions. 
The `params` argument specifies the agent's parameters and whether the agent is optimal or biased.\n\nFor an optimal agent, the parameters are:\n\n>`{utility: <utility_function>, alpha: <softmax_alpha>}`\n\nAn environment (or \"world\") and agent are combined with the `simulateMDP` function:\n\n>`simulateMDP(startState, world, agent, outputType)`\n\nGiven the utility function defined above, the highest utility state is at location 3 (three steps to the right from the origin). So an optimal agent (who doesn't hyperbolically discount) will move to this location and stay there.\n\n~~~~\n///fold: helper function that decrements time and triggers termination when time elapsed\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n\n// constructuor for the \ MDP environment:\n// argument *totalTime* is the time horizon\nvar makeLineMDP = function(totalTime) {\n\n var stateToActions = function(state) {\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.loc + action;\n var stateNewLoc = extend(state, {loc: newLoc});\n return advanceStateTime(stateNewLoc);\n };\n\n var world = { stateToActions, transition };\n\n var startState = {\n timeLeft: totalTime,\n terminateAfterAction: false,\n loc: 0\n };\n\n var utility = function(state, action){\n var table = { 0: 1, 3: 3 };\n return table[state.loc] ? table[state.loc] : 0;\n };\n\n return { world, startState, utility };\n};\n///\n\n// Construct line MDP environment\nvar totalTime = 5;\nvar lineMDP = makeLineMDP(totalTime);\nvar world = lineMDP.world;\n\n// The lineMDP object also includes a utility function and startState\nvar utility = lineMDP.utility;\nvar startState = lineMDP.startState;\n\n\n// Construct MDP agent\nvar params = { alpha: 1000, utility };\nvar agent = makeMDPAgent(params, world);\n\n// Simulate the agent on the lineMDP with *outputType* set to *states*\nvar trajectory = simulateMDP(startState, world, agent, 'states');\n\n// Display start state\nprint(trajectory);\n~~~~\n\nWe described the agent above as \ because it does not hyperbolically discount and it is not myopic. However, we can adjust its \ noise by modifying the parameter `alpha` and induce sub-optimal behavior. Moreover, we can change the agent's behavior on this MDP by over-writing the utility method in `params`.\n\nTo construct a time-inconsistent, hyperbolically-discounting MDP agent, we include additional attributes in the `params` argument:\n\n>`{ discount:<discount_parameter>, sophisticatedOrNaive: <boolean> }`\n\nThese attributes are explained in the [chapter](/chapters/5a-time-inconsistency.html) on hyperbolic discounting. The discounting agent stays at the origin because it isn't willing to \ in order to get a larger total reward at location 3.\n\n~~~~\n///fold: helper function that decrements time and triggers termination when time elapsed\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? 
state.terminateAfterAction : true\n });\n};\n\n// constructuor for the \ MDP environment:\n// argument *totalTime* is the time horizon\nvar makeLineMDP = function(totalTime) {\n\n var stateToActions = function(state) {\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.loc + action;\n var stateNewLoc = extend(state, {loc: newLoc});\n return advanceStateTime(stateNewLoc);\n };\n\n var world = { stateToActions, transition };\n\n var startState = {\n timeLeft: totalTime,\n terminateAfterAction: false,\n loc: 0\n };\n\n var utility = function(state, action){\n var table = { 0: 1, 3: 3 };\n return table[state.loc] ? table[state.loc] : 0;\n };\n\n return { world, startState, utility };\n};\n///\n\n// Construct line MDP environment\nvar totalTime = 5;\nvar lineMDP = makeLineMDP(totalTime);\nvar world = lineMDP.world;\n\n// The lineMDP object also includes a utility function and startState\nvar utility = lineMDP.utility;\nvar startState = lineMDP.startState;\n\n// Construct hyperbolic agent\nvar params = {\n alpha: 1000,\n utility,\n discount: 2,\n sophisticatedOrNaive: 'naive'\n};\nvar agent = makeMDPAgent(params, world);\nvar trajectory = simulateMDP(startState, world, agent, 'states');\nprint(trajectory);\n~~~~\n\nWe've shown how to create your own MDP and then run different agents on that MDP. You can also create your own MDP agent, as we illustrate below.\n\n>**Exercise:** Try some variations of the Line MDP by modifying the `transition` method in the `makeLineMDP` constructor above. For example, change the underlying graph structure from a line into a loop.\n\n-----------\n\n<a id=\"gridworld\"></a>\n\n### Creating Gridworld MDPs\n\nGridworld is a standard toy environment for reinforcement learning problems. The library contains a constructor for making a gridworld with your choice of dimensions and reward function. There is also a function for displaying gridworlds in the browser.\n\nWe begin by creating a simple gridworld environment (using `makeGridWorldMDP`) and display it using `viz.gridworld`.\n\n~~~~\n// Create a constructor for our gridworld\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n var grid = [\n [___, ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0, 0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n\nviz.gridworld(world, {trajectory: [startState]});\n~~~~\n\nGridworld states have a `loc` attribute for the agent's location (using discrete Cartesian coordinates). 
The agent is able to move up, down, left and right but is not able to stay put.\n\nHaving created a gridworld, we construct a utility function (where utility depends only on the agent's grid location) and simulate an optimal MDP agent.\n\n~~~~\n///fold: Create a constructor for our gridworld\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n var grid = [\n [___, ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0,0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n///\n\n// `isEqual` is in *underscore* (included in webppl-agents)\nvar utility = function(state, action) {\n return _.isEqual(state.loc, [0, 3]) ? 1 : 0;\n};\n\nvar params = { utility, alpha: 1000 };\nvar agent = makeMDPAgent(params, world);\nvar trajectory = simulateMDP(startState, world, agent);\nviz.gridworld(world, {trajectory: trajectory});\n~~~~\n\nYou can create terminal gridworld states by using features with a name. These named-features can also be used to create a utility function without specifying grid coordinates.\n\n~~~~\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n // named features are terminal\n var G = { name: 'gold' };\n var S = { name: 'silver' };\n\n var grid = [\n [ G , ___, ___],\n [ S , ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0, 0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n\n// The *makeUtilityFunction* method allows you to define\n// a utility function in terms of named features\nvar makeUtilityFunction = simpleGridWorld.makeUtilityFunction;\nvar table = {\n gold: 2,\n silver: 1.8,\n timeCost: -0.5\n};\nvar utility = makeUtilityFunction(table);\n\nvar params = { utility, alpha: 1000 };\nvar agent = makeMDPAgent(params, world);\nvar trajectory = simulateMDP(startState, world, agent);\nviz.gridworld(world, { trajectory });\n~~~~\n\nThere are many examples using gridworld in agentmodels.org, starting from this [chapter](/chapters/3b-mdp-gridworld.html).\n\n\n-------\n\n<a id=\"agents\"></a>\n\n### Creating your own agents\n\nAs well as creating your own environments, it is straightfoward to create your own agents for MDPs and POMDPs. Much of agentmodels.org is a tutorial on creating agents (e.g. optimal agents, myopic agents, etc.). Rather than recapitulate agentmodels.org, this section is brief and focuses on the basic interface that agents need to present.\n\nWe begin by creating an agent that chooses actions uniformly at random. To run on agent on an environment using the `simulateMDP` function, an agent object must have an `act` method and a `params` attribute. The `act` method is a function from states to a distribution on the available actions. 
The `params` attribute indicates whether or not the agent is an MDP or POMDP agent.\n\nWe use the simple gridworld environment from the codebox above.\n\n~~~~\n///fold: Build gridworld environment\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n // named features are terminal\n var G = { name: 'gold' };\n var S = { name: 'silver' };\n\n var grid = [\n [ G , ___, ___],\n [ S , ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0, 0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n\n// The *makeUtilityFunction* method allows you to define\n// a utility function in terms of named features\nvar makeUtilityFunction = simpleGridWorld.makeUtilityFunction;\nvar table = {\n gold: 2,\n silver: 1.8,\n timeCost: -0.5\n};\nvar utility = makeUtilityFunction(table);\n///\n\nvar actions = ['u', 'd', 'l', 'r'];\n\nvar act = function(state){\n return Infer({ model(){ return uniformDraw(actions); }});\n};\n\nvar randomAgent = { act, params: {} };\nvar trajectory = simulateMDP(startState, world, randomAgent);\nviz.gridworld(world, { trajectory });\n~~~~\n\nIn gridworld the same actions are available in each state. When the actions available depend on the state, the agent's `act` function needs access to the environment's `stateToActions` method.\n\n~~~~\n///fold: Create a constructor for our gridworld\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n // named features are terminal\n var G = { name: 'gold' };\n var S = { name: 'silver' };\n\n var grid = [\n [ G , ___, ___],\n [ S , ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0, 0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n\n// The *makeUtilityFunction* method allows you to define\n// a utility function in terms of named features\nvar makeUtilityFunction = simpleGridWorld.makeUtilityFunction;\nvar table = {\n gold: 2,\n silver: 1.8,\n timeCost: -0.5\n};\nvar utility = makeUtilityFunction(table);\n///\n\nvar makeRandomAgent = function(world) {\n var stateToActions = world.stateToActions;\n var act = function(state) {\n return Infer({ model() {\n return uniformDraw(stateToActions(state));\n }});\n };\n return { act, params: {} };\n};\n\nvar randomAgent = makeRandomAgent(world);\nvar trajectory = simulateMDP(startState, world, randomAgent);\n\nviz.gridworld(world, { trajectory });\n~~~~\n\nIn the example above, the agent constructor `makeRandomAgent` takes the environment (`world`) as an argument in order to access `stateToActions`. Agent constructors will typically also use the environment's `transition` method to internally simulate state transitions.\n\n>**Exercise:** Implement an agent who takes the action with highest expected utility under the random policy. (You can do this by making use of the codebox above. 
Use the `makeRandomAgent` and `simulateMDP` function within a new agent constructor.)\n\nIn addition to writing agents from scratch, you can build on the agents available in the library.\n\n>**Exercise:** Start with the optimal MDP agent found [here](https://github.com/agentmodels/webppl-agents/blob/master/src/agents/makeMDPAgent.wppl#L3). Create a variant of this optimal agent that takes \ random actions instead of softmax random actions.\n\n--------\n\n<a id=\></a>\n\n### Creating POMDPs\n\nPOMDPs are introduced in agentmodels.org in this [chapter](/chapters/3c-pomdp.html). This section explains how to create your own POMDPs for use in the library.\n\nAs we explained above, MDPs in webppl-agents are objects with a `transition` method and a `stateToActions` method. POMDPs also have a `transition` method. Instead of `stateToActions`, they have a `beliefToActions` method, which maps a belief distribution over states to a set of available actions. POMDPs also have an `observe` method, which maps states to observations (typically represented as strings).\n\nHere is a simple POMDP based on the \ example above. The agent moves along the integer line as before. This time the agent is uncertain whether or not there is high reward at location 3. The agent can only find out by moving to location 3 and receiving an observation.\n\n~~~~\n// States have the same structure as in MDPs:\n// the transition method needs to decrement\n// the state's *timeLeft* attribute until termination\n\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n\n\nvar makeLinePOMDP = function() {\n\n var beliefToActions = function(belief){\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.loc + action;\n var stateNewLoc = extend(state, {loc: newLoc});\n return advanceStateTime(stateNewLoc);\n };\n\n var observe = function(state) {\n if (state.loc == 3) {\n return state.treasureAt3 ? 'treasure' : 'no treasure';\n }\n return 'noObservation';\n };\n\n return { beliefToActions, transition, observe };\n\n};\n~~~~\n\nTo simulate an agent on this POMDP, we need to create a \"POMDP\" agent. POMDP agents have an `act` method which maps *beliefs* (rather than *states*) to distributions on actions. They also have an `updateBelief` method, mapping beliefs and observations to an updated belief.\n\nThis example uses the optimal POMDP agent. To construct a POMDP agent, we need to specify the agent's starting belief distribution on states. Here we assume the agent has a uniform distribution on whether or not there is \ at location 3.\n\n~~~~\n///fold:\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n\nvar makeLinePOMDP = function() {\n\n var beliefToActions = function(belief){\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.loc + action;\n var stateNewLoc = extend(state, {loc: newLoc});\n return advanceStateTime(stateNewLoc);\n };\n\n var observe = function(state) {\n if (state.loc == 3) {\n return state.treasureAt3 ? 
'treasure' : 'no treasure';\n }\n return 'noObservation';\n };\n\n return { beliefToActions, transition, observe };\n\n};\n///\n\nvar utility = function(state, action) {\n if (state.loc==3 && state.treasureAt3){ return 5; }\n if (state.loc==0){ return 1; }\n return 0;\n};\n\nvar trueStartState = {\n timeLeft: 7,\n terminateAfterAction: false,\n loc: 0,\n treasureAt3: false\n};\n\nvar alternativeStartState = extend(trueStartState, {treasureAt3: true});\nvar possibleStates = [trueStartState, alternativeStartState];\n\nvar priorBelief = Categorical({\n vs: possibleStates,\n ps: [.5, .5]\n});\n\nvar params = {\n alpha: 1000,\n utility,\n priorBelief,\n optimal: true\n};\n\nvar world = makeLinePOMDP();\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(trueStartState, world, agent, 'states');\nprint(trajectory);\n~~~~\n\nIn POMDPs the agent does not directly observe their current state. However, in the Line POMDP (above) the \ part of the agent's state is always known by the agent. The part of the state that is unknown is whether `treasureAt3` is true. So we could factor the state into attributes that are always known (\"manifest\") and parts that are not (\"latent\"). This factoring of the state can speed up the POMDP agent's belief-updating and is used for the POMDP environments in the library. The following codebox shows a factored version of the Line POMDP:\n\n~~~~\n///fold:\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n///\n\nvar makeLinePOMDP = function() {\n var manifestStateToActions = function(manifestState){\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.manifestState.loc + action;\n var manifestStateNewLoc = extend(state.manifestState,{loc: newLoc});\n var newManifestState = advanceStateTime(manifestStateNewLoc);\n return {\n manifestState: newManifestState,\n latentState: state.latentState\n };\n };\n\n var observe = function(state) {\n if (state.manifestState.loc == 3){\n return state.latentState.treasureAt3 ? 'treasure' : 'no treasure';\n }\n return 'noObservation';\n };\n\n return { manifestStateToActions, transition, observe};\n};\n\n\nvar utility = function(state, action) {\n if (state.manifestState.loc==3 && state.latentState.treasureAt3){ return 5; }\n if (state.manifestState.loc==0){ return 1; }\n return 0;\n};\n\nvar trueStartState = {\n manifestState: {\n timeLeft: 7,\n terminateAfterAction: false,\n loc: 0\n },\n latentState: {\n treasureAt3: false\n }\n};\n\nvar alternativeStartState = extend(trueStartState, {\n latentState: { treasureAt3: true }\n});\nvar possibleStates = [trueStartState, alternativeStartState];\n\nvar priorBelief = Categorical({\n vs: possibleStates,\n ps: [.5, .5]\n});\n\nvar params = {\n alpha: 1000,\n utility,\n priorBelief,\n optimal: true\n};\n\nvar world = makeLinePOMDP();\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(trueStartState, world, agent, 'states');\nprint(trajectory);\n~~~~\n\n\n\n---------\n\n\n### Footnotes\ndate_published2016-12-04T12:14:18ZauthorsOwain EvansAndreas StuhlmüllerJohn SalvatierDaniel Filansummariesfilename8-guide-library.md |
| idf8263956785bbfdfbafe1c3994d28c97titleModeling Agents with Probabilistic Programsurlhttps://agentmodels.org/chapters/2-webppl.htmlsourceagentmodelssource_typemarkdowntext---\nlayout: chapter\ntitle: \\ndescription: \\nis_section: true\n---\n\n## Introduction\n\nThis chapter introduces the probabilistic programming language WebPPL (pronounced \). The models for agents in this tutorial are all implemented in WebPPL and so it's important to understand how the language works.\n\nWe begin with a quick overview of probabilistic programming. If you are new to probabilistic programming, you might want to read an informal introduction (e.g. [here](http://www.pl-enthusiast.net/2014/09/08/probabilistic-programming/) or [here](https://moalquraishi.wordpress.com/2015/03/29/the-state-of-probabilistic-programming/)) or a more technical [survey](https://scholar.google.com/scholar?cluster=16211748064980449900&hl=en&as_sdt=0,5). For a practical introduction to both probabilistic programming and Bayesian modeling, we highly recommend [ProbMods](http://probmods.org), which also uses the WebPPL language. \n\nThe only requirement to run the code for this tutorial is a modern browser (e.g. Chrome, Firefox, Safari). If you want to explore the models in detail and to create your own, we recommend running WebPPL from the command line. Installation is simple and is explained [here](http://webppl.org).\n\n\n## WebPPL: a purely functional subset of Javascript\n\nWebPPL includes a subset of Javascript, and follows the syntax of Javascript for this subset.\n\nThis example program uses most of the Javascript syntax that is available in WebPPL:\n\n~~~~\n// Define a function using two external primitives:\n// 1. Javascript's `JSON.stringify` for converting to strings\n// 2. Underscore's _.isFinite for checking if a value is a finite number\nvar coerceToPositiveNumber = function(x) {\n if (_.isFinite(x) && x > 0) {\n return x;\n } else {\n print('- Input ' + JSON.stringify(x) +\n ' was not a positive number, returning 1 instead');\n return 1;\n }\n};\n\n// Create an array with numbers, an object, an a Boolean\nvar inputs = [2, 3.5, -1, { key: 1 }, true];\n\n// Map the function over the array\nprint('Processing elements in array ' + JSON.stringify(inputs) + '...');\nvar result = map(coerceToPositiveNumber, inputs);\nprint('Result: ' + JSON.stringify(result));\n~~~~\n\nLanguage features with side effects are not allowed in WebPPL. The code that has been commented out uses assignment to update a table. This produces an error in WebPPL.\n\n~~~~\n// Don't do this:\n\n// var table = {};\n// table.key = 1;\n// table.key = table.key + 1;\n// => Syntax error: You tried to assign to a field of table, but you can\n// only assign to fields of globalStore\n\n\n// Instead do this:\n\nvar table = { key: 1 };\nvar tableTwo = { key: table.key + 1 };\nprint(tableTwo);\n\n// Or use the library function `extend`:\n\nvar tableThree = extend(tableTwo, { key: 3 })\nprint(tableThree);\n~~~~\n\nThere are no `for` or `while` loops. Instead, use higher-order functions like WebPPL's built-in `map`, `filter` and `zip`:\n\n~~~~\nvar xs = [1, 2, 3];\n\n// Don't do this:\n\n// for (var i = 0; i < xs.length; i++){\n// print(xs[i]);\n// }\n\n\n// Instead of for-loop, use `map`:\nmap(print, xs);\n\n\\n~~~~\n\nIt is possible to use normal Javascript functions (which make *internal* use of side effects) in WebPPL. 
See the [online book](http://dippl.org/chapters/02-webppl.html) on the implementation of WebPPL for details (section \).\n\n\n## WebPPL stochastic primitives\n\n### Sampling from random variables\n\nWebPPL has a large [library](http://docs.webppl.org/en/master/distributions.html) of primitive probability distributions. Try clicking \ repeatedly to get different i.i.d. random samples:\n\n~~~~\nprint('Fair coins (Bernoulli distribution):');\nprint([flip(0.5), flip(0.5), flip(0.5)]);\n\nprint('Biased coins (Bernoulli distribution):');\nprint([flip(0.9), flip(0.9), flip(0.9)]);\n\nvar coinWithSide = function(){\n return categorical([.45, .45, .1], ['heads', 'tails', 'side']);\n};\n\nprint('Coins that can land on their edge:')\nprint(repeat(5, coinWithSide)); // draw 5 i.i.d samples\n~~~~\n\nThere are also continuous random variables:\n\n~~~~\nprint('Two samples from standard Gaussian in 1D: ');\nprint([gaussian(0, 1), gaussian(0, 1)]);\n\nprint('A single sample from a 2D Gaussian: ');\nprint(multivariateGaussian(Vector([0, 0]), Matrix([[1, 0], [0, 10]])));\n~~~~\n\nYou can write your own functions to sample from more complex distributions. This example uses recursion to define a sampler for the Geometric distribution:\n\n~~~~\nvar geometric = function(p) {\n return flip(p) ? 1 + geometric(p) : 1\n};\n\ngeometric(0.8);\n~~~~\n\nWhat makes WebPPL different from conventional programming languages is its ability to perform *inference* operations using these primitive probability distributions. Distribution objects in WebPPL have two key features:\n\n1. You can draw *random i.i.d. samples* from a distribution using the special function `sample`. That is, you sample $$x \\sim P$$ where $$P(x)$$ is the distribution.\n\n2. You can compute the probability (or density) the distribution assigns to a value. That is, to compute $$\\log(P(x))$$, you use `dist.score(x)`, where `dist` is the distribution in WebPPL.\n\nThe functions above that generate random samples are defined in the WebPPL library in terms of primitive distributions (e.g. `Bernoulli` for `flip` and `Gaussian` for `gaussian`) and the built-in function `sample`:\n\n~~~~\nvar flip = function(p) {\n var p = (p !== undefined) ? p : 0.5;\n return sample(Bernoulli({ p }));\n};\n\nvar gaussian = function(mu, sigma) {\n return sample(Gaussian({ mu, sigma }));\n};\n\n[flip(), gaussian(1, 1)];\n~~~~\n\nTo create a new distribution, we pass a (potentially stochastic) function with no arguments---a *thunk*---to the function `Infer` that performs *marginalization*. For example, we can use `flip` as an ingredient to construct a Binomial distribution using enumeration:\n\n~~~~\nvar binomial = function() {\n var a = flip(0.5);\n var b = flip(0.5);\n var c = flip(0.5);\n return a + b + c;\n};\n\nvar MyBinomial = Infer({ model: binomial });\n\n[sample(MyBinomial), sample(MyBinomial), sample(MyBinomial)];\n~~~~\n\n`Infer` is the *inference operator* that computes (or estimates) the marginal probability of each possible output of the function `binomial`. If no explicit inference method is specified, `Infer` defaults to enumerating each possible value of each random variable in the function body.\n\n### Bayesian inference by conditioning\n\nThe most important use of inference methods is for Bayesian inference. Here, our task is to *infer* the value of some unknown parameter by observing data that depends on the parameter. For example, if flipping three separate coins produce exactly two Heads, what is the probability that the first coin landed Heads? 
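(A quick sanity check before writing the program: among the equally likely outcomes with exactly two Heads -- HHT, HTH, THH -- the first coin is Heads in two of three cases, so the answer should be 2/3.) 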
To solve this in WebPPL, we can use `Infer` to enumerate all values for the random variables `a`, `b` and `c`. We use `condition` to constrain the sum of the variables. The result is a distribution representing the posterior distribution on the first variable `a` having value `true` (i.e. \).\n\n~~~~\nvar twoHeads = Infer({\n model() {\n var a = flip(0.5);\n var b = flip(0.5);\n var c = flip(0.5);\n condition(a + b + c === 2);\n return a;\n }\n});\n\nprint('Probability of first coin being Heads (given exactly two Heads) : ');\nprint(Math.exp(twoHeads.score(true)));\n\nvar moreThanTwoHeads = Infer({\n model() {\n var a = flip(0.5);\n var b = flip(0.5);\n var c = flip(0.5);\n condition(a + b + c >= 2);\n return a;\n }\n});\n\nprint('\\Probability of first coin being Heads (given at least two Heads): ');\nprint(Math.exp(moreThanTwoHeads.score(true)));\n~~~~\n\n### Codeboxes and Plotting\n\nThe codeboxes allow you to modify our examples and to write your own WebPPL code. Code is not shared between boxes. You can use the special function `viz` to plot distributions:\n\n~~~~\nvar appleOrangeDist = Infer({\n model() {\n return flip(0.9) ? 'apple' : 'orange';\n }\n});\n\nviz(appleOrangeDist);\n~~~~\n\n~~~~\nvar fruitTasteDist = Infer({\n model() {\n return {\n fruit: categorical([0.3, 0.3, 0.4], ['apple', 'banana', 'orange']),\n tasty: flip(0.7)\n };\n }\n});\n\nviz(fruitTasteDist);\n~~~~\n\n~~~~\nvar positionDist = Infer({\n model() {\n return {\n X: gaussian(0, 1),\n Y: gaussian(0, 1)};\n },\n method: 'forward',\n samples: 1000\n});\n\nviz(positionDist);\n~~~~\n\n### Next\n\nIn the [next chapter](/chapters/3-agents-as-programs.html), we will implement rational decision-making using inference functions.\ndate_published2017-03-19T18:54:16ZauthorsOwain EvansAndreas StuhlmüllerJohn SalvatierDaniel Filansummariesfilename2-webppl.md |
| idd62bdfd1eaa50efa1f628ecdc10da0dftitleModeling Agents with Probabilistic Programsurlhttps://agentmodels.org/chapters/4-reasoning-about-agents.htmlsourceagentmodelssource_typemarkdowntext---\nlayout: chapter\ntitle: Reasoning about agents\ndescription: Overview of Inverse Reinforcement Learning. Inferring utilities and beliefs from choices in Gridworld and Bandits.\nis_section: true\n---\n\n\n## Introduction\nThe previous chapters have shown how to compute optimal actions for agents in MDPs and POMDPs. In many practical applications, this is the goal. For example, when controlling a robot, the goal is for the robot to act optimally given its utility function. When playing the stock market or poker, the goal is make money and one might use an approach based on the POMDP agent model from the [previous chapter](/chapters/3c-pomdp).\n\nIn other settings, however, the goal is to *learn* or *reason about* an agent based on their behavior. For example, in social science or psychology researchers often seek to learn about people's preferences (denoted $$U$$) and beliefs (denoted $$b$$). The relevant *data* (denoted $$\\{a_i\\}$$) are usually observations of human actions. In this situation, models of optimal action can be used as *generative models* of human actions. The generative model predicts the behavior *given* preferences and beliefs. That is:\n\n$$\nP( \\{a_i\\} \\vert U, b) =: \\text{Generative model of optimal action}\n$$\n\nStatistical inference infers the preferences $$U$$ and beliefs $$b$$ *given* the observed actions $$\\{a_i\\}$$. That is:\n\n$$\nP( U, b \\vert \\{a_i\\}) =: \\text{Invert generative model via statistical inference}\n$$\n\nThis approach, using generative models of sequential decision making, has been used to learn preferences and beliefs about education, work, health, and many other topics[^generative].\n\n[^generative]: The approach in economics closest to the one we outline here (with models of action based on sequential decision making) is called \"Structural Estimation\". Some particular examples are reft:aguirregabiria2010dynamic and reft:darden2010smoking. A related piece of work in AI or computational social science is reft:ermon2014learning.\n\nAgent models are also used as generative models in Machine Learning, under the label \"Inverse Reinforcement Learning\" (IRL). One motivation for learning human preferences and beliefs is to give humans helpful recommendations (e.g. for products they are likely to enjoy). A different goal is to build systems that mimic human expert performance. For some tasks, it is hard for humans to directly specify a utility/reward function that is both correct and that can be tractably optimized. An alternative is to *learn* the human's utility function by watching them perform the task. Once learned, the system can use standard RL techniques to optimize the function. This has been applied to building systems to park cars, to fly helicopters, to control human-like bots in videogames, and to play table tennis[^inverse].\n\n[^inverse]: The relevant papers on applications of IRL: parking cars in reft:abbeel2008apprenticeship, flying helicopters in reft:abbeel2010autonomous, controlling videogame bots in reft:lee2010learning, and table tennis in reft:muelling2014learning.\n\nThis chapter provides an array of illustrative examples of learning about agents from their actions. We begin with a concrete example and then provide a general formalization of the inference problem. 
A virtue of using WebPPL is that doing inference over our existing agent models requires very little extra code.\n\n\n## Learning about an agent from their actions: motivating example\n\nConsider the MDP version of Bob's Restaurant Choice problem. Bob is choosing between restaurants, all restaurants are open (and Bob knows this), and Bob also knows the street layout. Previously, we discussed how to compute optimal behavior *given* Bob's utility function over restaurants. Now we infer Bob's utility function *given* observations of the behavior in the codebox:\n\n~~~~\n///fold: restaurant constants, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar world = makeGridWorldMDP({ grid, start: [3, 1] }).world;\n\nviz.gridworld(world, { trajectory: donutSouthTrajectory });\n~~~~\n\nFrom Bob's actions, we infer that he probably prefers the Donut Store to the other restaurants. An alternative explanation is that Bob cares most about saving time. He might prefer Veg (the Vegetarian Cafe) but his preference is not strong enough to spend extra time getting there.\n\nIn this first example of inference, Bob's preference for saving time is held fixed and we infer (given the actions shown above) Bob's preference for the different restaurants. We model Bob using the MDP agent model from [Chapter 3.1](/chapters/3a-mdp.html). We place a uniform prior over three possible utility functions for Bob: one favoring Donut, one favoring Veg and one favoring Noodle. We compute a Bayesian posterior over these utility functions *given* Bob's observed behavior. Since the world is practically deterministic (with softmax parameter $$\\alpha$$ set high), we just compare Bob's predicted states under each utility function to the states actually observed. 
To predict Bob's states for each utility function, we use the function `simulate` from [Chapter 3.1](/chapters/3a-mdp.html).\n\n~~~~\n///fold: restaurant constants, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\nvar makeUtilityFunction = mdp.makeUtilityFunction;\nvar world = mdp.world;\n\nvar startState = donutSouthTrajectory[0][0];\n\nvar utilityTablePrior = function() {\n var baseUtilityTable = {\n 'Donut S': 1,\n 'Donut N': 1,\n 'Veg': 1,\n 'Noodle': 1,\n 'timeCost': -0.04\n };\n return uniformDraw(\n [{ table: extend(baseUtilityTable, { 'Donut N': 2, 'Donut S': 2 }),\n favourite: 'donut' },\n { table: extend(baseUtilityTable, { Veg: 2 }),\n favourite: 'veg' },\n { table: extend(baseUtilityTable, { Noodle: 2 }),\n favourite: 'noodle' }]\n );\n};\n\nvar posterior = Infer({ model() {\n var utilityTableAndFavourite = utilityTablePrior();\n var utilityTable = utilityTableAndFavourite.table;\n var favourite = utilityTableAndFavourite.favourite;\n\n var utility = makeUtilityFunction(utilityTable);\n var params = {\n utility,\n alpha: 2\n };\n var agent = makeMDPAgent(params, world);\n\n var predictedStateAction = simulateMDP(startState, world, agent, 'stateAction');\n condition(_.isEqual(donutSouthTrajectory, predictedStateAction));\n return { favourite };\n}});\n\nviz(posterior);\n~~~~\n\n## Learning about an agent from their actions: formalization\n\nWe will now formalize the kind of inference in the previous example. We begin by considering inference over the utilities and softmax noise parameter for an MDP agent. Later on we'll generalize to POMDP agents and to other agents.\n\nFollowing [Chapter 3.1](/chapters/3a-mdp.html) the MDP agent is defined by a utility function $$U$$ and softmax parameter $$\\alpha$$. In order to do inference, we need to know the agent's starting state $$s_0$$ (which might include both their *location* and their *time horizon* $$N$$). The data we condition on is a sequence of state-action pairs:\n\n$$\n(s_0, a_0), (s_1, a_1), \\ldots, (s_n, a_n)\n$$\n\nThe index for the final timestep is less than or equal to the time horzion: $$n \\leq N$$. We abbreviate this sequence as $$(s,a)_{0:n}$$. 
The joint posterior on the agent's utilities and noise given the observed state-action sequence is:\n\n$$\nP(U,\\alpha | (s,a)_{0:n}) \\propto P( {(s,a)}_{0:n} | U, \\alpha) P(U, \\alpha)\n$$\n\nwhere the likelihood function $$P( {(s,a)}_{0:n} \\vert U, \\alpha )$$ is the MDP agent model (for simplicity we omit information about the starting state). Due to the Markov Assumption for MDPs, the probability of an agent's action in a state is independent of the agent's previous or later actions (given $$U$$ and $$\\alpha$$). This allows us to rewrite the posterior as **Equation (1)**:\n\n$$\nP(U,\\alpha | (s,a)_{0:n}) \\propto P(U, \\alpha) \\prod_{i=0}^n P( a_i | s_i, U, \\alpha)\n$$\n\n\nThe term $$P( a_i \\vert s_i, U, \\alpha)$$ can be rewritten as the softmax choice function (which corresponds to the function `act` in our MDP agent models). This equation holds for the case where we observe a sequence of actions from timestep $$0$$ to $$n \\leq N$$ (with no gaps). This tutorial focuses mostly on this case. It is trivial to extend the equation to observing multiple independently drawn such sequences (as we show below). However, if there are gaps in the sequence or if we observe only the agent's states (not the actions), then we need to marginalize over actions that were unobserved.\n\n\n## Examples of learning about agents in MDPs\n\n### Example: Inference from part of a sequence of actions\n\nThe expression for the joint posterior (above) shows that it is straightforward to do inference on a part of an agent's action sequence. For example, if we know an agent had a time horizon $$N=11$$, we can do inference from only the agent's first few actions.\n\nFor this example we condition on the agent making a single step from $$[3,1]$$ to $$[2,1]$$ by moving left. For an agent with low noise, this already provides very strong evidence about the agent's preferences -- not much is added by seeing the agent go all the way to Donut South.\n\n<!-- show_single_step_trajectory -->\n~~~~\n///fold: restaurant constants\nvar ___ = ' ';\nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n///\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar world = makeGridWorldMDP({ grid }).world;\n\nvar trajectory = [\n {\n loc: [3, 1],\n timeLeft: 11,\n terminateAfterAction: false\n },\n {\n loc: [2, 1],\n timeLeft: 10,\n terminateAfterAction: false\n }\n];\n\nviz.gridworld(world, { trajectory });\n~~~~\n\nOur approach to inference is slightly different than in the example at the start of this chapter. The approach is a direct translation of the expression for the posterior in Equation (1) above. For each observed state-action pair, we compute the likelihood of the agent (with given $$U$$) choosing that action in the state. 
In contrast, the simple approach above becomes intractable for long, noisy action sequences -- as it will need to loop over all possible sequences.\n\n<!-- infer_from_single_step_trajectory -->\n~~~~\n///fold: create restaurant choice MDP\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\nvar world = mdp.world;\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\nvar utilityTablePrior = function(){\n var baseUtilityTable = {\n 'Donut S': 1,\n 'Donut N': 1,\n 'Veg': 1,\n 'Noodle': 1,\n 'timeCost': -0.04\n };\n return uniformDraw(\n [{ table: extend(baseUtilityTable, { 'Donut N': 2, 'Donut S': 2 }),\n favourite: 'donut' },\n { table: extend(baseUtilityTable, { 'Veg': 2 }),\n favourite: 'veg' },\n { table: extend(baseUtilityTable, { 'Noodle': 2 }),\n favourite: 'noodle' }]\n );\n};\n\nvar observedTrajectory = [[{\n loc: [3, 1],\n timeLeft: 11,\n terminateAfterAction: false\n}, 'l']];\n\nvar posterior = Infer({ model() {\n var utilityTableAndFavourite = utilityTablePrior();\n var utilityTable = utilityTableAndFavourite.table;\n var utility = makeUtilityFunction(utilityTable);\n var favourite = utilityTableAndFavourite.favourite;\n\n var agent = makeMDPAgent({ utility, alpha: 2 }, world);\n var act = agent.act;\n\n // For each observed state-action pair, factor on likelihood of action\n map(\n function(stateAction){\n var state = stateAction[0];\n var action = stateAction[1];\n observe(act(state), action);\n },\n observedTrajectory);\n\n return { favourite };\n}});\n\nviz(posterior);\n~~~~\n\nNote that utility functions where Veg or Noodle are most preferred have almost the same posterior probability. Since they had the same prior, this means that we haven't received evidence about which the agent prefers. Moreover, assuming the agent's `timeCost` is negligible, then no matter where the agent above starts out on the grid, they choose Donut North or South. So we never get any information about whether they prefer the Vegetarian Cafe or Noodle Shop!\n\nActually, this is not quite right. If we wait long enough, the agent's softmax noise would eventually reveal information about which was preferred. However, we still won't be able to *efficiently* learn the agent's preferences by repeatedly watching them choose from a random start point. If there is no softmax noise, then we can make the stronger claim that even in the limit of arbitrarily many repeated i.i.d. observations, the agent's preferences are not *identified* by draws from this space of scenarios.\n\nUnidentifiability is a frequent problem when inferring an agent's beliefs or utilities from realistic datasets. First, agents with low noise reliably avoid inferior states (as in the present example) and so their actions provide little information about the relative utilities among the inferior states. Second, using richer agent models means there are more possible explanations of the same behavior. For example, agents with high softmax noise or with false beliefs might go to a restaurant even if they don't prefer it. 
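A further source of unidentifiability (not specific to this example) is that softmax choice depends only on the product of $$\alpha$$ and the utilities: doubling every utility while halving $$\alpha$$ leaves all action distributions unchanged, so no amount of behavioral data distinguishes the two. Here is a minimal sketch of this point, using a toy one-shot choice rather than one of the chapter's agents:\n\n~~~~\n// Softmax choice over a toy utility table.\n// Only the product alpha * utility matters, so the two calls\n// below produce identical distributions.\nvar softmaxChoice = function(alpha, utilityTable) {\n  return Infer({ model() {\n    var action = uniformDraw(['left', 'right']);\n    factor(alpha * utilityTable[action]);\n    return action;\n  }});\n};\n\nviz(softmaxChoice(2, { left: 1, right: 0.5 }));\nviz(softmaxChoice(1, { left: 2, right: 1 }));\n~~~~\n\n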
One general approach to the problem of unidentifiability in IRL is **active learning**. Instead of passively observing the agent's actions, you select a sequence of environments that will be maximally informative about the agent's preferences. For recent work covering both the nature of unidentifiability in IRL as well as the active learning approach, see reft:amin2016towards.\n\n### Example: Inferring The Cost of Time and Softmax Noise\n\nThe previous examples assumed that the agent's `timeCost` (the negative utility of each timestep before the agent reaches a restaurant) and the softmax $$\\alpha$$ were known. We can modify the above example to include them in inference.\n\n~~~~\n// infer_utilities_timeCost_softmax_noise\n///fold: create restaurant choice MDP, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar vegDirectTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"u\"],\n [{\"loc\":[3,6],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5]},\"r\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[3,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":4,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar world = mdp.world;\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\n\n// Priors\n\nvar utilityTablePrior = function() {\n var foodValues = [0, 1, 2];\n var timeCostValues = [-0.1, -0.3, -0.6];\n var donut = uniformDraw(foodValues);\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': uniformDraw(foodValues),\n 'Noodle': uniformDraw(foodValues),\n 'timeCost': uniformDraw(timeCostValues)\n };\n};\n\nvar alphaPrior = function(){\n return uniformDraw([.1, 1, 10, 100]);\n};\n\n\n// Condition on observed trajectory\n\nvar posterior = function(observedTrajectory){\n return Infer({ model() {\n var utilityTable = utilityTablePrior();\n var 
alpha = alphaPrior();\n var params = {\n utility: makeUtilityFunction(utilityTable),\n alpha\n };\n var agent = makeMDPAgent(params, world);\n var act = agent.act;\n\n // For each observed state-action pair, factor on likelihood of action\n map(\n function(stateAction){\n var state = stateAction[0];\n var action = stateAction[1]\n observe(act(state), action);\n },\n observedTrajectory);\n\n // Compute whether Donut is preferred to Veg and Noodle\n var donut = utilityTable['Donut N'];\n var donutFavorite = (\n donut > utilityTable.Veg &&\n donut > utilityTable.Noodle);\n\n return {\n donutFavorite,\n alpha: alpha.toString(),\n timeCost: utilityTable.timeCost.toString()\n };\n }});\n};\n\nprint('Prior:');\nvar prior = posterior([]);\nviz.marginals(prior);\n\nprint('Conditioning on one action:');\nvar posterior = posterior(donutSouthTrajectory.slice(0, 1));\nviz.marginals(posterior);\n~~~~\n\n<!-- TODO: plot prior and posterior on same axes -->\n\nThe posterior shows that taking a step towards Donut South can now be explained in terms of a high `timeCost`. If the agent has a low value for $$\\alpha$$, this step to the left is fairly likely even if the agent prefers Noodle or Veg. So including softmax noise in the inference makes inferences about other parameters closer to the prior.\n\n>**Exercise:** Suppose the agent is observed going all the way to Veg. What would the posteriors on $$\\alpha$$ and `timeCost` look like? Check your answer by conditioning on the state-action sequence `vegDirectTrajectory`. You will need to modify other parts of the codebox above to make this work.\n\nAs we noted previously, it is simple to extend our approach to inference to conditioning on multiple sequences of actions. Consider the two sequences below:\n\n<!-- display_multiple_trajectories -->\n~~~~\n///fold: make restaurant choice MDP, naiveTrajectory, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n 
[{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar world = mdp.world;;\n\nmap(function(trajectory) { viz.gridworld(world, { trajectory }); },\n [naiveTrajectory, donutSouthTrajectory]);\n~~~~\n\nTo perform inference, we just condition on both sequences. (We use concatenation but we could have taken the union of all state-action pairs).\n\n<!-- infer_from_multiple_trajectories -->\n~~~~\n///fold: World and agent are exactly as above\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar world = mdp.world;\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\n\n// Priors\n\nvar utilityTablePrior = function() {\n var foodValues = [0, 1, 2];\n var timeCostValues = [-0.1, -0.3, -0.6];\n var donut = uniformDraw(foodValues);\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': uniformDraw(foodValues),\n 'Noodle': uniformDraw(foodValues),\n 'timeCost': uniformDraw(timeCostValues)\n };\n};\n\nvar alphaPrior = function(){\n return uniformDraw([.1, 1, 10, 100]);\n};\n\n\n// Condition on observed trajectory\n\nvar posterior = function(observedTrajectory){\n return Infer({ model() {\n var utilityTable = utilityTablePrior();\n var alpha = alphaPrior();\n var params = {\n utility: makeUtilityFunction(utilityTable),\n alpha\n };\n var agent = makeMDPAgent(params, world);\n var act = agent.act;\n\n // For each observed state-action pair, factor on likelihood of action\n map(\n function(stateAction){\n var state = stateAction[0];\n var action = stateAction[1]\n observe(act(state), 
action);\n },\n observedTrajectory);\n\n // Compute whether Donut is preferred to Veg and Noodle\n var donut = utilityTable['Donut N'];\n var donutFavorite = (\n donut > utilityTable.Veg &&\n donut > utilityTable.Noodle);\n\n return {\n donutFavorite,\n alpha: alpha.toString(),\n timeCost: utilityTable.timeCost.toString()\n };\n }});\n};\n\n///\nprint('Prior:');\nvar prior = posterior([]);\nviz.marginals(prior);\n\nprint('Posterior');\nvar posterior = posterior(naiveTrajectory.concat(donutSouthTrajectory));\nviz.marginals(posterior);\n~~~~\n<!-- TODO: plot prior and posterior on same axes -->\n\n\n## Learning about agents in POMDPs\n\n### Formalization\n\nWe can extend our approach to inference to deal with agents that solve POMDPs. One approach to inference is simply to generate full state-action sequences and compare them to the observed data. As we mentioned above, this approach becomes intractable in cases where noise (in transitions and actions) is high and sequences are long.\n\nInstead, we extend the approach in Equation (1) above. The first thing to notice is that Equation (1) has to be amended for POMDPs. In an MDP, actions are conditionally independent given the agent's parameters $$U$$ and $$\\alpha$$ and the state. For any pair of actions $$a_{i}$$ and $$a_j$$ and state $$s_i$$:\n\n$$\nP(a_i \\vert a_j, s_i, U,\\alpha) = P(a_i \\vert s_i, U,\\alpha)\n$$\n\nIn a POMDP, actions are only rendered conditionally independent if we also condition on the agent's *belief*. So Equation (1) can only be extended to the case where we know the agent's belief at each timestep. This will be realistic in some applications and not others. It depends on whether the agent's *observations* are part of the data that is conditioned on. If so, the agent's belief can be computed at each timestep (assuming the agent's initial belief is known). If not, we have to marginalize over the possible observations, making for a more complex inference computation.\n\nHere is the extension of Equation (1) to the POMDP case, where we assume access to the agent's observations. <a id=\></a>Our goal is to compute a posterior on the parameters of the agent. These include $$U$$ and $$\\alpha$$ as before but also the agent's initial belief $$b_0$$.\n\nWe observe a sequence of state-observation-action triples:\n\n$$\n(s_0,o_0,a_0), (s_1,o_1,a_1), \\ldots, (s_n,o_n,a_n)\n$$\n\nThe index for the final timestep is at most the time horzion: $$n \\leq N$$. The joint posterior on the agent's utilities and noise given the observed sequence is:\n\n$$\nP(U,\\alpha, b_0 | (s,o,a)_{0:n}) \\propto P( (s,o,a)_{0:n} | U, \\alpha, b_0)P(U, \\alpha, b_0)\n$$\n\nTo produce a factorized form of this posterior analogous to Equation (1), we compute the sequence of agent beliefs. This is given by the recursive Bayesian belief update described in [Chapter 3.3](/chapters/3c-pomdp):\n\n$$\nb_i = b_{i-1} \\vert s_i, o_i, a_{i-1}\n$$\n\n$$\nb_i(s_i) \\propto\nO(s_i,a_{i-1},o_i)\n\\sum_{s_i \\in S} { T(s_{i-1}, a_{i-1}, s_i) b_{i-1}(s_{i-1})}\n$$\n\nThe posterior can thus be written as **Equation (2)**: <a id=\></a>\n\n$$\nP(U, \\alpha, b_0 | (s,o,a)_{0:n}) \\propto P(U, \\alpha, b_0) \\prod_{i=0}^n P( a_i | s_i, b_i, U, \\alpha)\n$$\n\n\n### Application: Bandits\n\nTo learn the preferences and beliefs of a POMDP agent we translate Equation (2) into WebPPL. In a later [chapter](/chapters/5e-joint-inference.html), we apply this to the Restaurant Choice problem. 
Here we focus on the Bandit problems introduced in the [previous chapter](/chapters/3c-pomdp).\n\nIn the Bandit problems there is an unknown mapping from arms to non-numeric prizes (or distributions on such prizes) and the agent has preferences over these prizes. The agent tries out arms to discover the mapping and exploits the most promising arms. In the *inverse* problem, we get to observe the agent's actions. Unlike the agent, we already know the mapping from arms to prizes. However, we don't know the agent's preferences or the agent's prior about the mapping[^bandit].\n\n[^bandit]: If we did not know the mapping from arms to prizes, the inference problem would not change fundamentally. We get information about this mapping by observing the prizes the agent receives when pulling different arms.\n\nOften the agent's choices admit of multiple explanations. Recall the deterministic example in the previous chapter when (according to the agent's belief) `arm0` had the prize \ and `arm1` had either \ or \ (see also Figure 2 below). Suppose we observe the agent chosing `arm0` on the first of five trials. If we don't know the agent's utilities or beliefs, then this choice could be explained by either:\n\n(1). the agent's preference for chocolate over champagne, or\n\n(2). the agent's belief that `arm1` is very likely (e.g. 95%) to yield the \ prize deterministically\n\nGiven this choice by the agent, we won't be able to identify which of (1) and (2) is true because exploration becomes less valuable every trial (and there's only 5 trials total).\n\nThe codeboxes below implements this example. The translation of Equation (2) is in the function `factorSequence`. This function iterates through the observed state-observation-action triples, updating the agent's belief at each timestep. It interleaves conditioning on an action (via `factor`) with computing the sequence of belief functions $$b_i$$. The variable names correspond as follows:\n\n- $$b_0$$ is `initialBelief` (an argument to `factorSequence`)\n\n- $$s_i$$ is `state`\n\n- $$b_i$$ is `nextBelief`\n\n- $$a_i$$ is `observedAction`\n\n~~~~\nvar inferBeliefsAndPreferences = function(baseAgentParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence) {\n\n return Infer({ model() {\n\n // 1. Sample utilities\n var prizeToUtility = (priorPrizeToUtility ? sample(priorPrizeToUtility)\n : undefined);\n\n // 2. Sample beliefs\n var initialBelief = sample(priorInitialBelief);\n\n // 3. Construct agent given utilities and beliefs\n var newAgentParams = extend(baseAgentParams, { priorBelief: initialBelief });\n var agent = makeBanditAgent(newAgentParams, bandit, 'belief', prizeToUtility);\n var agentAct = agent.act;\n var agentUpdateBelief = agent.updateBelief;\n\n // 4. Condition on observations\n var factorSequence = function(currentBelief, previousAction, timeIndex){\n if (timeIndex < observedSequence.length) {\n var state = observedSequence[timeIndex].state;\n var observation = observedSequence[timeIndex].observation;\n var nextBelief = agentUpdateBelief(currentBelief, observation, previousAction);\n var nextActionDist = agentAct(nextBelief);\n var observedAction = observedSequence[timeIndex].action;\n factor(nextActionDist.score(observedAction));\n factorSequence(nextBelief, observedAction, timeIndex + 1);\n }\n };\n factorSequence(initialBelief,'noAction', 0);\n\n return {\n prizeToUtility,\n priorBelief: initialBelief\n };\n }});\n};\n~~~~\n\nWe start with a very simple example. The agent is observed pulling `arm1` five times. 
The agent's prior is known and assigns equal weight to `arm1` yielding \ and to it yielding \. The true prize for `arm1` is \ (see Figure 1).\n\n<img src=\ alt=\ style=\/>\n\n> **Figure 1:** Bandit problem where agent's prior is known. (The true state has the bold outline).\n\nFrom the observation, it's obvious that the agent prefers champagne. This is what we infer below:\n\n~~~~\n///fold: inferBeliefsAndPreferences, getMarginal\nvar inferBeliefsAndPreferences = function(baseAgentParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence) {\n\n return Infer({ model() {\n\n // 1. Sample utilities\n var prizeToUtility = (priorPrizeToUtility ? sample(priorPrizeToUtility)\n : undefined);\n\n // 2. Sample beliefs\n var initialBelief = sample(priorInitialBelief);\n\n // 3. Construct agent given utilities and beliefs\n var newAgentParams = extend(baseAgentParams, { priorBelief: initialBelief });\n var agent = makeBanditAgent(newAgentParams, bandit, 'belief', prizeToUtility);\n var agentAct = agent.act;\n var agentUpdateBelief = agent.updateBelief;\n\n // 4. Condition on observations\n var factorSequence = function(currentBelief, previousAction, timeIndex){\n if (timeIndex < observedSequence.length) {\n var state = observedSequence[timeIndex].state;\n var observation = observedSequence[timeIndex].observation;\n var nextBelief = agentUpdateBelief(currentBelief, observation, previousAction);\n var nextActionDist = agentAct(nextBelief);\n var observedAction = observedSequence[timeIndex].action;\n factor(nextActionDist.score(observedAction));\n factorSequence(nextBelief, observedAction, timeIndex + 1);\n }\n };\n factorSequence(initialBelief,'noAction', 0);\n\n return {\n prizeToUtility,\n priorBelief: initialBelief\n };\n }});\n};\n\nvar getMarginal = function(dist, key){\n return Infer({ model() {\n return sample(dist)[key];\n }});\n};\n///\n// true prizes for arms\nvar trueArmToPrizeDist = {\n 0: Delta({ v: 'chocolate' }),\n 1: Delta({ v: 'champagne' })\n};\nvar bandit = makeBanditPOMDP({\n armToPrizeDist: trueArmToPrizeDist,\n numberOfArms: 2,\n numberOfTrials: 5\n});\n\n// simpleAgent always pulls arm 1\nvar simpleAgent = makePOMDPAgent({\n act: function(belief){\n return Infer({ model() { return 1; }});\n },\n updateBelief: function(belief){ return belief; },\n params: { priorBelief: Delta({ v: bandit.startState }) }\n}, bandit.world);\n\nvar observedSequence = simulatePOMDP(bandit.startState, bandit.world, simpleAgent,\n 'stateObservationAction');\n\n// Priors for inference\n\n// We know agent's prior, which is that either arm1 yields\n// nothing or it yields champagne.\nvar priorInitialBelief = Delta({ v: Infer({ model() {\n var armToPrizeDist = uniformDraw([\n trueArmToPrizeDist,\n extend(trueArmToPrizeDist, { 1: Delta({ v: 'nothing' }) })]);\n return makeBanditStartState(5, armToPrizeDist);\n}})});\n\n// Agent either prefers chocolate or champagne.\nvar likesChampagne = {\n nothing: 0,\n champagne: 5,\n chocolate: 3\n};\nvar likesChocolate = {\n nothing: 0,\n champagne: 3,\n chocolate: 5\n};\nvar priorPrizeToUtility = Categorical({\n vs: [likesChampagne, likesChocolate],\n ps: [0.5, 0.5]\n});\nvar baseParams = { alpha: 1000 };\nvar posterior = inferBeliefsAndPreferences(baseParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence);\n\nprint(\"After observing agent choose arm1, what are agent's utilities?\informed\champagne\nothing\/assets/img/4-irl-bandit-2.png\diagram\width: 600px;\champagne\nothing\, : , : [, , , ], : [], : } |
| {: , : , : , : , : , : Time inconsistency II\/chapters/3a-mdp.html#recursion\the softmax action the agent would take in state $$s'$$ given that their rewards occur with a delay $$d+1$$\".\n\nThe Naive agent simulates his future actions by computing $$C(s';d+1)$$; the Sophisticated agent computes the action that will *actually* occur, which is $$C(s';0)$$. So if we want to simulate an environment including a hyperbolic discounter, we can compute the agent's action with $$C(s;0)$$ for every state $$s$$. \n\n\n### Implementing the hyperbolic discounter\n \nAs with the MDP and POMDP agents, our WebPPL implementation directly translates the mathematical formulation of Naive and Sophisticated hyperbolic discounting. The variable names correspond as follows:\n\n- The function $$\\delta$$ is named `discountFunction`\n\n- The \, which controls how the agent's simulated future self evaluates rewards, is $$d$$ in the math and `perceivedDelay` below. \n\n- $$s'$$, $$a'$$, $$d+1$$ correspond to `nextState`, `nextAction` and `delay+1` respectively. \n\nThis codebox simplifies the code for the hyperbolic discounter by omitting definitions of `transition`, `utility` and so on:\n\n~~~~\nvar makeAgent = function(params, world) {\n\n var act = dp.cache( \n function(state, delay){\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action, delay); \n factor(params.alpha * eu);\n return action;\n }}); \n });\n \n var expectedUtility = dp.cache(\n function(state, action, delay){\n var u = discountFunction(delay) * utility(state, action);\n if (state.terminateAfterAction){\n return u; \n } else { \n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var perceivedDelay = isNaive ? delay + 1 : 0;\n var nextAction = sample(act(nextState, perceivedDelay));\n return expectedUtility(nextState, nextAction, delay+1); \n }}));\n } \n });\n \n return { params, expectedUtility, act };\n};\n~~~~\n\nThe next codebox shows how the Naive agent can end up at Donut North in the Restaurant Choice problem, despite this being dominated for any possible utility function. The Naive agent first moves in the direction of Veg, which initially looks better than Donut South. When right outside Donut North, discounting makes it look better than Veg. To visualize this, we display the agent's expected utility calculations at different steps along its trajectory. The crucial values are the `expectedValue` of going left at [3,5] when `delay=0` compared with `delay=4`. The function `plannedTrajectories` uses `expectedValue` to access these values. For each timestep, we plot the agent's position and the expected utility of each action they might perform in the future. \n\n<!-- simulate_hyperbolic_agent -->\n~~~~\n///fold: makeAgent, mdp, plannedTrajectories\nvar makeAgent = function(params, world) {\n var defaultParams = {\n alpha: 500, \n discount: 1\n };\n var params = extend(defaultParams, params);\n var stateToActions = world.stateToActions;\n var transition = world.transition;\n var utility = params.utility;\n var paramsDiscountFunction = params.discountFunction;\n\n var discountFunction = (\n paramsDiscountFunction ? 
\n paramsDiscountFunction : \n function(delay){ return 1/(1 + params.discount*delay); });\n\n var isNaive = params.sophisticatedOrNaive === 'naive';\n\n var act = dp.cache( \n function(state, delay) {\n var delay = delay || 0; // make sure delay is never undefined\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action, delay);\n factor(params.alpha * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(\n function(state, action, delay) {\n var u = discountFunction(delay) * utility(state, action);\n if (state.terminateAfterAction){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var perceivedDelay = isNaive ? delay + 1 : 0;\n var nextAction = sample(act(nextState, perceivedDelay));\n return expectedUtility(nextState, nextAction, delay+1);\n }}));\n }\n });\n\n return { params, expectedUtility, act };\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar MAPActionPath = function(state, world, agent, actualTotalTime, statesOrActions) { \n var perceivedTotalTime = state.timeLeft;\n assert.ok(perceivedTotalTime > 1 || state.terminateAfterAction==false,\n 'perceivedTime<=1. If=1 then should have state.terminateAfterAction,' +\n ' but then simulate wont work ' + JSON.stringify(state));\n\n var agentAction = agent.act;\n var expectedUtility = agent.expectedUtility;\n var transition = world.transition;\n\n var sampleSequence = function (state, actualTimeLeft) {\n var action = agentAction(state, actualTotalTime-actualTimeLeft).MAP().val;\n var nextState = transition(state, action); \n var out = {states:state, actions:action, both:[state,action]}[statesOrActions];\n if (actualTimeLeft==0 || state.terminateAfterAction){\n return [out];\n } else {\n return [ out ].concat( sampleSequence(nextState, actualTimeLeft-1));\n }\n };\n return sampleSequence(state, actualTotalTime);\n};\n\nvar plannedTrajectory = function(world, agent) {\n var getExpectedUtilities = function(trajectory, agent, actions) { \n var expectedUtility = agent.expectedUtility;\n var v = mapIndexed(function(i, state) {\n return [state, map(function (a) { return expectedUtility(state, a, i); }, actions)];\n }, trajectory );\n return v;\n };\n return function(state) {\n var currentPlan = MAPActionPath(state, world, agent, state.timeLeft, 'states');\n return getExpectedUtilities(currentPlan, agent, world.actions);\n };\n} \n\nvar plannedTrajectories = function(trajectory, world, agent) { \n var getTrajectory = plannedTrajectory(world, agent);\n return map(getTrajectory, trajectory);\n}\n///\n\nvar world = mdp.world;\nvar start = mdp.startState;\n\nvar utilityTable = {\n 'Donut N': [10, -10], // [immediate reward, delayed reward]\n 'Donut S': [10, -10],\n 'Veg': [-10, 20],\n 'Noodle': [0, 0],\n 'timeCost': -.01 // cost of taking a single action \n};\n\nvar restaurantUtility = function(state, action) {\n var feature = world.feature;\n var name = feature(state).name;\n if (name) {\n return 
utilityTable[name][state.timeAtRestaurant]\n } else {\n return utilityTable.timeCost;\n }\n};\n\nvar runAndGraph = function(agent) { \n var trajectory = simulateMDP(mdp.startState, world, agent);\n var plans = plannedTrajectories(trajectory, world, agent);\n viz.gridworld(world, {\n trajectory, \n dynamicActionExpectedUtilities: plans\n });\n};\n\nvar agent = makeAgent({\n sophisticatedOrNaive: 'naive', \n utility: restaurantUtility\n}, world);\n\nprint('Naive agent: \\n\\n');\nrunAndGraph(agent);\n~~~~\n\nWe run the Sophisticated agent with the same parameters and visualization. \n\n<!-- simulate_hyperbolic_agent_sophisticated -->\n~~~~\n///fold: \nvar makeAgent = function(params, world) {\n var defaultParams = {\n alpha: 500, \n discount: 1\n };\n var params = extend(defaultParams, params);\n var stateToActions = world.stateToActions;\n var transition = world.transition;\n var utility = params.utility;\n var paramsDiscountFunction = params.discountFunction;\n\n var discountFunction = (\n paramsDiscountFunction ? \n paramsDiscountFunction : \n function(delay){ return 1/(1+params.discount*delay); });\n\n var isNaive = params.sophisticatedOrNaive === 'naive';\n\n var act = dp.cache( \n function(state, delay) {\n var delay = delay || 0; // make sure delay is never undefined\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action, delay);\n factor(params.alpha * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(\n function(state, action, delay) {\n var u = discountFunction(delay) * utility(state, action);\n if (state.terminateAfterAction){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var perceivedDelay = isNaive ? delay + 1 : 0;\n var nextAction = sample(act(nextState, perceivedDelay));\n return expectedUtility(nextState, nextAction, delay+1);\n }}));\n }\n });\n\n return { params, expectedUtility, act };\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar world = mdp.world;\n\nvar utilityTable = {\n 'Donut N': [10, -10], // [immediate reward, delayed reward]\n 'Donut S': [10, -10],\n 'Veg': [-10, 20],\n 'Noodle': [0, 0],\n 'timeCost': -.01 // cost of taking a single action \n};\n\nvar restaurantUtility = function(state, action) {\n var feature = world.feature;\n var name = feature(state).name;\n if (name) {\n return utilityTable[name][state.timeAtRestaurant]\n } else {\n return utilityTable.timeCost;\n }\n};\n\nvar MAPActionPath = function(state, world, agent, actualTotalTime, statesOrActions) { \n var perceivedTotalTime = state.timeLeft;\n assert.ok(perceivedTotalTime > 1 || state.terminateAfterAction==false,\n 'perceivedTime<=1. 
If=1 then should have state.terminateAfterAction,' +\n ' but then simulate wont work ' + JSON.stringify(state));\n\n var agentAction = agent.act;\n var expectedUtility = agent.expectedUtility;\n var transition = world.transition;\n\n var sampleSequence = function (state, actualTimeLeft) {\n var action = agentAction(state, actualTotalTime-actualTimeLeft).MAP().val;\n var nextState = transition(state, action); \n var out = {states:state, actions:action, both:[state,action]}[statesOrActions];\n if (actualTimeLeft==0 || state.terminateAfterAction){\n return [out];\n } else {\n return [ out ].concat( sampleSequence(nextState, actualTimeLeft-1));\n }\n };\n return sampleSequence(state, actualTotalTime);\n};\n\nvar plannedTrajectory = function(world, agent) {\n var getExpectedUtilities = function(trajectory, agent, actions) { \n var expectedUtility = agent.expectedUtility;\n var v = mapIndexed(function(i, state) {\n return [state, map(function (a) { return expectedUtility(state, a, i); }, actions)];\n }, trajectory );\n return v;\n };\n return function(state) {\n var currentPlan = MAPActionPath(state, world, agent, state.timeLeft, 'states');\n return getExpectedUtilities(currentPlan, agent, world.actions);\n };\n};\n\nvar plannedTrajectories = function(trajectory, world, agent) { \n var getTrajectory = plannedTrajectory(world, agent);\n return map(getTrajectory, trajectory);\n};\n\nvar runAndGraph = function(agent) { \n var trajectory = simulateMDP(mdp.startState, world, agent);\n var plans = plannedTrajectories(trajectory, world, agent);\n viz.gridworld(world, {\n trajectory, \n dynamicActionExpectedUtilities: plans\n });\n};\n///\n\nvar agent = makeAgent({\n sophisticatedOrNaive: 'sophisticated', \n utility: restaurantUtility\n}, world);\n\nprint('Sophisticated agent: \\n\\n');\nrunAndGraph(agent);\n~~~~\n\n>**Exercise**: What would an exponential discounter with identical preferences to the agents above do on the Restaurant Choice problem? Implement an exponential discounter in the codebox above by adding a `discountFunction` property to the `params` argument to `makeAgent`. \n<br>\n\n--------\n\n<a id='procrastination'></a>\n\n### Example: Procrastinating on a task\n\nCompared to the Restaurant Choice problem, procrastination leads to (systematically biased) behavior that is especially hard to explain on the softmax noise mode.\n\n> **The Procrastination Problem**\n> <br>You have a hard deadline of ten days to complete a task (e.g. write a paper for class, apply for a school or job). Completing the task takes a full day and has a *cost* (it's unpleasant work). After the task is complete you get a *reward* (typically exceeding the cost). There is an incentive to finish early: every day you delay finishing, your reward gets slightly smaller. (Imagine that it's good for your reputation to complete tasks early or that early applicants are considered first).\n\nNote that if the task is worth doing at the last minute, then you should do it immediately (because the reward diminishes over time). Yet people often do this kind of task at the last minute -- the worst possible time to do it!\n\nHyperbolic discounting provides an elegant model of this behavior. On Day 1, a hyperbolic discounter will prefer that they complete the task tomorrow rather than today. Moreover, a Naive agent wrongly predicts they will complete the task tomorrow and so puts off the task till Day 2. 
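To make this concrete, consider one illustrative setting (the parameters of the codebox below: reward 4.5, work cost -1, wait cost -0.1) with discount rate $$k=2$$, so that $$\delta(d) = 1/(1+2d)$$. On Day 1, working today costs $$-1$$ now and yields the reward $$4.5$$ one step later, for a discounted value of $$-1 + 4.5/3 = 0.5$$. Putting the task off and (as the Naive agent predicts) working tomorrow costs $$-1$$ at delay one and yields $$4.4$$ (the reward shrunk by one wait step) at delay two, for a discounted value of about $$0.55$$ (that is, $$-1/3 + 4.4/5$$). So from today's perspective, doing the task tomorrow looks better than doing it today. 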
When Day 2 arrives, the Naive agent reasons in the same way -- telling themself that they can avoid the work today by putting it off till tomorrow. This continues until the last possible day, when the Naive agent finally completes the task.\n\nIn this problem, the behavior of optimal and time-inconsistent agents with identical preferences (i.e. utility functions) diverges. If the deadline is $$T$$ days from the start, the optimal agent will do the task immediately and the Naive agent will do the task on Day $$T$$. Any problem where a time-inconsistent agent receives exponentially lower reward than an optimal agent contains a close variant of our Procrastination Problem refp:kleinberg2014time [^kleinberg]. \n\n[^kleinberg]: Kleinberg and Oren's paper considers a variant problem where the each cost/penalty for waiting is received immediately (rather than being delayed until the time the task is done). In this variant, the agent must eventually complete the task. The authors consider \ time-inconsistent agents, i.e. agents who do not discount their next reward, but discount all future rewards by $$\\beta < 1$$. They show that in any problem where a semi-myopic agent receives exponentially lower reward than an optimal agent, the problem must contain a copy of their variant of the Procrastination Problem.\n\nWe formalize the Procrastination Problem in terms of a deterministic graph. Suppose the **deadline** is $$T$$ steps from the start. Assume that after $$t$$ < $$T$$ steps the agent has not yet completed the task. Then the agent can take the action `\` (which has **work cost** $$-w$$) or the action `\` with zero cost. After the `\` action the agent transitions to the `\` state and receives $$+(R - t \\epsilon)$$, where $$R$$ is the **reward** for the task and $$\\epsilon$$ is how much the reward diminishes for every day of waiting (the **wait cost**). See Figure 3 below. \n\n<img src=\ alt=\ style=\/>\n\n>**Figure 3:** Transition graph for Procrastination Problem. States are represented by nodes. Edges are state-transitions and are labeled with the action name and the utility of the state-action pair. Terminal nodes have a bold border and their utility is labeled below.\n\nWe simulate the behavior of hyperbolic discounters on the Procrastination Problem. We vary the discount rate $$k$$ while holding the other parameters fixed. The agent's behavior can be summarized by its final state (`\"wait_state\"` or `\"reward_state`) and by how much time elapses before termination. When $$k$$ is sufficiently high, the agent will not even complete the task on the last day. \n\n<!-- procrastinate -->\n~~~~\n///fold: makeProcrastinationMDP, makeProcrastinationUtility\nvar makeProcrastinationMDP = function(deadlineTime) {\n var stateLocs = [\"wait_state\", \"reward_state\"];\n var actions = [\"wait\", \"work\", \"relax\"];\n\n var stateToActions = function(state) {\n return (state.loc === \"wait_state\" ? 
\n [\"wait\", \"work\"] :\n [\"relax\"]);\n };\n\n var advanceTime = function (state) {\n var newTimeLeft = state.timeLeft - 1;\n var terminateAfterAction = (newTimeLeft === 1 || \n state.loc === \"reward_state\");\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: terminateAfterAction\n });\n };\n\n var transition = function(state, action) {\n assert.ok(_.includes(stateLocs, state.loc) && _.includes(actions, action), \n 'procrastinate transition:' + [state.loc,action]);\n \n if (state.loc === \"reward_state\") {\n return advanceTime(state);\n } else if (action === \"wait\") {\n var waitSteps = state.waitSteps + 1;\n return extend(advanceTime(state), { waitSteps });\n } else {\n var newState = extend(state, { loc: \"reward_state\" });\n return advanceTime(newState);\n }\n };\n\n var feature = function(state) {\n return state.loc;\n };\n\n var startState = {\n loc: \"wait_state\",\n waitSteps: 0,\n timeLeft: deadlineTime,\n terminateAfterAction: false\n };\n\n return {\n actions,\n stateToActions,\n transition,\n feature,\n startState\n };\n};\n\n\nvar makeProcrastinationUtility = function(utilityTable) {\n assert.ok(hasProperties(utilityTable, ['waitCost', 'workCost', 'reward']),\n 'makeProcrastinationUtility args');\n var waitCost = utilityTable.waitCost;\n var workCost = utilityTable.workCost;\n var reward = utilityTable.reward;\n\n // NB: you receive the *workCost* when you leave the *wait_state*\n // You then receive the reward when leaving the *reward_state* state\n return function(state, action) {\n if (state.loc === \"reward_state\") {\n return reward + state.waitSteps * waitCost;\n } else if (action === \"work\") {\n return workCost;\n } else {\n return 0;\n }\n };\n};\n///\n\n// Construct Procrastinate world \nvar deadline = 10;\nvar world = makeProcrastinationMDP(deadline);\n\n// Agent params\nvar utilityTable = {\n reward: 4.5,\n waitCost: -0.1,\n workCost: -1\n};\n\nvar params = {\n utility: makeProcrastinationUtility(utilityTable),\n alpha: 1000,\n discount: null,\n sophisticatedOrNaive: 'sophisticated'\n};\n\nvar getLastState = function(discount){\n var agent = makeMDPAgent(extend(params, { discount: discount }), world);\n var states = simulateMDP(world.startState, world, agent, 'states');\n return [last(states).loc, states.length];\n};\n\nmap(function(discount) {\n var lastState = getLastState(discount);\n print('Discount: ' + discount + '. Last state: ' + lastState[0] +\n '. Time: ' + lastState[1] + '\\n')\n}, _.range(8));\n~~~~\n\n\n>**Exercise:**\n\n> 1. Explain how an exponential discounter would behave on this task. Assume their utilities are the same as above and consider different discount rates.\n> 2. Run the codebox above with a Sophisticated agent. Explain the results. \n\n\nNext chapter: [Myopia for rewards and belief updates](/chapters/5c-myopic.html)\n\n<br>\n\n### Footnotes\n", "date_published": "2019-08-28T09:06:53Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5b-time-inconsistency.md"} |
| {"id": "9f19984716deb794d2bfcdf32a08282a", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/5e-joint-inference.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Joint inference of biases and preferences II\ndescription: Explaining temptation and pre-commitment using softmax noise and hyperbolic discounting.\n\n---\n\n## Restaurant Choice: Time-inconsistent vs. optimal MDP agents\n\nReturning to the MDP Restaurant Choice problem, we compare a model that assumes an optimal, non-discounting MDP agent to a model that includes both time-inconsistent and optimal agents. We also consider models that expand the set of preferences the agent can have.\n\n<!-- 2. In the POMDP setting (where the restaurants may be open or closed and the agent can learn this from observation), we do joint inference over preferences, beliefs and discounting behavior. We show that our inference approach can produce multiple explanations for the same behavior and that explanations in terms of beliefs and preferences are more plausible than those involving time-inconsistency.\n\nAs we discussed in Chapter V.1, time-inconsistent agents can produce trajectories on the MDP (full knowledge) version of this scenario that never occur for an optimal agent without noise.\n\nIn our first inference example, we do joint inference over preferences, softmax noise and the discounting behavior of the agent. (We assume for this example that the agent has full knowledge and is not Myopic). We compare the preference inferences [that allow for possibility of time inconsistency] to the earlier inference approach that assumes optimality.\n-->\n\n### Assume discounting, infer \"Naive\" or \"Sophisticated\"\n\nBefore making a direct comparison, we demonstrate that we can infer the preferences of time-inconsistent agents from observations of their behavior.\n\nFirst we condition on the path where the agent moves to Donut North. 
We call this the Naive path because it is distinctive to the Naive hyperbolic discounter (who is tempted by Donut North on the way to Veg):\n\n<!-- draw_naive_path -->\n~~~~\n///fold: restaurant choice MDP, naiveTrajectory\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: naiveTrajectory });\n~~~~\n\nFor inference, we specialize the approach in the previous <a href=\"/chapters/5d-joint-inference.html#formalization\">chapter</a> for agents in MDPs that are potentially time inconsistent. So we infer $$\\nu$$ and $$k$$ (the hyperbolic discounting parameters) but not the initial belief state $$b_0$$. The function `exampleGetPosterior` is a slightly simplified version of the library function we use below.\n\n<!-- getPosterior_function -->\n~~~~\nvar exampleGetPosterior = function(mdp, prior, observedStateAction){\n var world = mdp.world;\n var makeUtilityFunction = mdp.makeUtilityFunction;\n return Infer({ model() {\n\n // Sample parameters from prior\n var priorUtility = prior.priorUtility;\n var utilityTable = priorUtility();\n var priorDiscounting = prior.discounting\n var sophisticatedOrNaive = priorDiscounting().sophisticatedOrNaive;\n\n var priorAlpha = prior.priorAlpha;\n\n // Create agent with those parameters\n var agent = makeMDPAgent({\n utility: makeUtilityFunction(utilityTable),\n alpha: priorAlpha(),\n discount: priorDiscounting().discount,\n sophisticatedOrNaive : sophisticatedOrNaive\n }, world);\n\n var agentAction = agent.act;\n\n // Condition on observed actions\n map(function(stateAction) {\n var state = stateAction[0];\n var action = stateAction[1];\n observe(agentAction(state, 0), action);\n }, observedStateAction);\n\n // return parameters and summary statistics\n var vegMinusDonut = sum(utilityTable['Veg']) - sum(utilityTable['Donut N']);\n\n return {\n utility: utilityTable,\n sophisticatedOrNaive: discounting.sophisticatedOrNaive,\n discount: discounting.discount,\n alpha,\n vegMinusDonut,\n };\n }});\n};\n~~~~\n\nThis inference function allows for inference over the softmax parameter ($$\\alpha$$ or `alpha`) and the discount constant ($$k$$ or `discount`). For this example, we fix these values so that the agent has low noise ($$\\alpha=1000$$) and so $$k=1$$. 
We also fix the `timeCost` utility to be small and negative and Noodle's utility to be negative. We infer only the agent's utilities and whether they are Naive or Sophisticated.\n\n<!-- infer_assume_discounting_naive -->\n~~~~\n///fold: Call to hyperbolic library function and helper display function\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. 
Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x){\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x){\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(\n function(x) {\n return {\n sophisticatedOrNaive: x,\n probability: getPriorProb({sophisticatedOrNaive: x}),\n distribution: 'prior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(\n function(x){\n return {\n sophisticatedOrNaive: x,\n probability: getPosteriorProb({sophisticatedOrNaive: x}),\n distribution: 'posterior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPosteriorDataTable,\n sophisticationPriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPriorProb({vegMinusDonut: x}),\n distribution: 'prior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({vegMinusDonut: x}),\n distribution: 'posterior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({donutTempting: x}),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({donutTempting: x}),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n};\n///\n\n// Prior on agent's utility function: each restaurant has an\n// *immediate* utility and a *delayed* utility (which is received after a\n// delay of 1).\nvar priorUtility = function(){\n var utilityValues = [-10, 0, 10, 20];\n var donut = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n var veg = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function(){\n return {\n discount: 1,\n sophisticatedOrNaive: uniformDraw(['naive', 'sophisticated'])\n };\n};\nvar priorAlpha = function(){ return 1000; };\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar posterior = getPosterior(mdp.world, prior, naiveTrajectory);\n\n// To get the prior, we condition on the empty list of observations\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nWe display maximum values and marginal distributions for both the prior and the posterior conditioned on the path shown above. 
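The bar charts produced by `displayResults` come from reading marginal probabilities off these distributions. Here is a minimal sketch of that step, using WebPPL's built-in `marginalize` in place of the library helper `getMarginalObject`; we assume `posterior` is the distribution returned by `getPosterior` in the codebox above.

~~~~
// Marginal probability that the agent is Naive, read off the posterior.
// Assumes `posterior` comes from the codebox above. marginalize is a WebPPL
// built-in that projects the joint return value onto a single named component.
var sophisticationMarginal = marginalize(posterior, 'sophisticatedOrNaive');
print('P(naive | observed path) = ' +
      Math.exp(sophisticationMarginal.score('naive')));
~~~~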
To compute the prior, we simply condition on the empty list of observations.\n\nThe first graph shows the distribution over whether the agent is Sophisticated or Naive (labeled `sophisticatedOrNaive`). For the other graphs, we compute summary statistics of the agent's parameters and display the distribution over them. The variable `vegMinusDonut` is the difference in *total* utility between Veg and Donut, ignoring the fact that each restaurant has an *immediate* and *delayed* utility. Inference rules out cases where the total utility is equal (which is most likely in the prior), since the agent would simply go to Donut South in that case. Finally, we introduce a variable `donutTempting`, which is true if the agent prefers Veg to Donut North at the start but reverses this preference when adjacent to Donut North. The prior probability of `donutTempting` is less than $$0.1$$, since it depends on relatively delicate balance of utilities and the discounting behavior. The posterior is closer to $$0.9$$, suggesting (along with the posterior on `sophisticatedOrNaive`) that this is the explanation of the data favored by the model.\n\n--------\n\nUsing the same prior, we condition on the \"Sophisticated\" path (i.e. the path distinctive to the Sophisticated agent who avoids the temptation of Donut North and takes the long route to Veg):\n\n<!-- draw_sophisticated_path -->\n~~~~\n///fold:\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar sophisticatedTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"r\"],\n [{\"loc\":[4,3],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"r\"],\n [{\"loc\":[5,3],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[4,3]},\"u\"],\n [{\"loc\":[5,4],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[5,3]},\"u\"],\n [{\"loc\":[5,5],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[5,4]},\"u\"],\n [{\"loc\":[5,6],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[5,5]},\"l\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":3,\"previousLoc\":[5,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":2,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":2,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: sophisticatedTrajectory });\n~~~~\n\nHere are the results of inference:\n\n<!-- infer_assume_discounting_sophisticated -->\n~~~~\n///fold: Definition of world, prior and inference function is same as above codebox\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , 
___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(\n function(x){\n return {\n sophisticatedOrNaive: x,\n probability: getPriorProb({sophisticatedOrNaive: x}),\n distribution: 'prior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(\n function(x){\n return {\n sophisticatedOrNaive: x,\n probability: getPosteriorProb({sophisticatedOrNaive: x}),\n distribution: 'posterior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPriorDataTable,\n sophisticationPosteriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPriorProb({vegMinusDonut: x}),\n distribution: 'prior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({vegMinusDonut: x}),\n distribution: 'posterior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({ donutTempting: x }),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n};\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20];\n var donut = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n var veg = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function(){\n return {\n discount: 1,\n sophisticatedOrNaive: 
uniformDraw(['naive','sophisticated'])\n };\n};\nvar priorAlpha = function(){ return 1000; };\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar sophisticatedTrajectory = [\n [{\:[3,1],\:false,\:11},\],\n [{\:[3,2],\:false,\:10,\:[3,1]},\],\n [{\:[3,3],\:false,\:9,\:[3,2]},\],\n [{\:[4,3],\:false,\:8,\:[3,3]},\],\n [{\:[5,3],\:false,\:7,\:[4,3]},\],\n [{\:[5,4],\:false,\:6,\:[5,3]},\],\n [{\:[5,5],\:false,\:5,\:[5,4]},\],\n [{\:[5,6],\:false,\:4,\:[5,5]},\],\n [{\:[4,6],\:false,\:3,\:[5,6]},\],\n [{\:[4,7],\:false,\:2,\:[4,6],\:0},\],\n [{\:[4,7],\:true,\:2,\:[4,7],\:1},\]\n];\n///\n\n// Get world and observations\nvar posterior = getPosterior(mdp.world, prior, sophisticatedTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nIf the agent goes directly to Veg, then they don't provide information about whether they are Naive or Sophisticated. Using the same prior again, we do inference on this path:\n\n<!-- draw_vegDirect_path -->\n~~~~\n///fold:\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar vegDirectTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"u\"],\n [{\"loc\":[3,6],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5]},\"r\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[3,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":4,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: vegDirectTrajectory });\n~~~~\n\nHere are the results of inference:\n\n<!-- infer_assume_discount_vegDirect -->\n~~~~\n// Definition of world, prior and inference function is same as above codebox\n\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. 
Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(function(x) {\n return {sophisticatedOrNaive: x,\n probability: getPriorProb({sophisticatedOrNaive: x}),\n distribution: 'prior'};\n }, ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(function(x) {\n return {sophisticatedOrNaive: x,\n probability: getPosteriorProb({sophisticatedOrNaive: x}),\n distribution: 'posterior'};\n }, ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPriorDataTable,\n sophisticationPosteriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(function(x){\n return {\n vegMinusDonut: x,\n probability: getPriorProb({vegMinusDonut: x}),\n distribution: 'prior'\n };\n }, [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutPosteriorDataTable = map(function(x){\n return {vegMinusDonut: x,\n probability: getPosteriorProb({vegMinusDonut: x}),\n distribution: 'posterior'};\n }, [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, {groupBy: 'distribution'});\n\n\n var donutTemptingPriorDataTable = map(function(x){\n return {\n donutTempting: x,\n probability: getPriorProb({donutTempting: x}),\n distribution: 'prior'\n };\n }, [true, false]);\n\n var donutTemptingPosteriorDataTable = map(function(x){\n return {\n donutTempting: x,\n probability: getPosteriorProb({donutTempting: x}),\n distribution: 'posterior'\n };\n }, [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n};\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20];\n var donut = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n var veg = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: 1,\n sophisticatedOrNaive: uniformDraw(['naive','sophisticated'])\n };\n};\nvar priorAlpha = function(){return 1000;};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar vegDirectTrajectory = [\n [{\:[3,1],\:false,\:11},\],\n [{\:[3,2],\:false,\:10,\:[3,1]},\],\n [{\:[3,3],\:false,\:9,\:[3,2]},\],\n [{\:[3,4],\:false,\:8,\:[3,3]},\],\n 
[{\:[3,5],\:false,\:7,\:[3,4]},\],\n [{\:[3,6],\:false,\:6,\:[3,5]},\],\n [{\:[4,6],\:false,\:5,\:[3,6]},\],\n [{\:[4,7],\:false,\:4,\:[4,6],\:0},\],\n [{\:[4,7],\:true,\:4,\:[4,7],\:1},\]\n];\n///\n\nvar posterior = getPosterior(mdp.world, prior, vegDirectTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\n<br>\n\n---------\n\n### Assume non-discounting, infer preferences and softmax\n\nWe want to compare a model that assumes an optimal MDP agent with one that allows for time-inconsistency. We first show the inferences by the model that assumes optimality. This model can only explain the anomalous Naive and Sophisticated paths in terms of softmax noise (lower values for $$\\alpha$$). We display the prior and posteriors for both the Naive and Sophisticated paths.\n\n<!-- infer_assume_optimal_naive_sophisticated -->\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var vegMinusDonutPriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPriorProb({ vegMinusDonut: x }),\n distribution: 'prior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({ vegMinusDonut: x }),\n distribution: 'posterior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var alphaPriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPriorProb({ alpha: x }),\n distribution: 'prior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPosteriorProb({ alpha: x }),\n distribution: 'posterior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\:[3,1],\:false,\:11},\],\n [{\:[3,2],\:false,\:10,\:[3,1]},\],\n 
[{\:[3,3],\:false,\:9,\:[3,2]},\],\n [{\:[3,4],\:false,\:8,\:[3,3]},\],\n [{\:[3,5],\:false,\:7,\:[3,4]},\],\n [{\:[2,5],\:false,\:6,\:[3,5],\:0},\],\n [{\:[2,5],\:true,\:6,\:[2,5],\:1},\]\n];\n\nvar sophisticatedTrajectory = [\n [{\:[3,1],\:false,\:11},\],\n [{\:[3,2],\:false,\:10,\:[3,1]},\],\n [{\:[3,3],\:false,\:9,\:[3,2]},\],\n [{\:[4,3],\:false,\:8,\:[3,3]},\],\n [{\:[5,3],\:false,\:7,\:[4,3]},\],\n [{\:[5,4],\:false,\:6,\:[5,3]},\],\n [{\:[5,5],\:false,\:5,\:[5,4]},\],\n [{\:[5,6],\:false,\:4,\:[5,5]},\],\n [{\:[4,6],\:false,\:3,\:[5,6]},\],\n [{\:[4,7],\:false,\:2,\:[4,6],\:0},\],\n [{\:[4,7],\:true,\:2,\:[4,7],\:1},\]\n];\n///\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30, 40];\n // with no discounting, delayed utilities are ommitted\n var donut = [uniformDraw(utilityValues), 0];\n var veg = [uniformDraw(utilityValues), 0];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\n// We assume no discounting (so *sophisticated* has no effect here)\nvar priorDiscounting = function() {\n return {\n discount: 0,\n sophisticatedOrNaive: 'sophisticated'\n };\n};\n\nvar priorAlpha = function(){ return uniformDraw([0.1, 10, 100, 1000]); };\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar world = mdp.world;\n\nprint('Prior and posterior after observing Naive path');\nvar posteriorNaive = getPosterior(world, prior, naiveTrajectory);\ndisplayResults(getPosterior(world, prior, []), posteriorNaive);\n\nprint('Prior and posterior after observing Sophisticated path');\nvar posteriorSophisticated = getPosterior(world, prior, sophisticatedTrajectory);\ndisplayResults(getPosterior(world, prior, []), posteriorSophisticated);\n~~~~\n\nThe graphs show two important results:\n\n1. For the Naive path, the agent is inferred to prefer Donut, while for the Sophisticated path, Veg is inferred. In both cases, the inference fits with where the agent ends up.\n\n2. High values for $$\\alpha$$ are ruled out in each case, showing that the model explains the behavior in terms of noise.\n\nWhat happens if we observe the agent taking the Naive path *repeatedly*? While noise is needed to explain the agent's path, too much noise is inconsistent with taking an identical path repeatedly. This is confirmed in the results below:\n\n<!-- infer_assume_optimal_naive_three_times -->\n~~~~\n///fold: Prior is same as above\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. 
Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var vegMinusDonutPriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPriorProb({ vegMinusDonut: x }),\n distribution: 'prior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({ vegMinusDonut: x }),\n distribution: 'posterior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var alphaPriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPosteriorProb({alpha: x}),\n distribution: 'posterior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n};\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30, 40];\n // with no discounting, delayed utilities are ommitted\n var donut = [uniformDraw(utilityValues), 0];\n var veg = [uniformDraw(utilityValues), 0];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\n// We assume no discounting (so *sophisticated* has no effect here)\nvar priorDiscounting = function(){\n return {\n discount: 0,\n sophisticatedOrNaive: 'sophisticated'\n };\n};\n\nvar priorAlpha = function(){\n return uniformDraw([0.1, 10, 100, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar numberRepeats = 2; 
// with 2 repeats, we condition a total of 3 times\nvar posteriorNaive = getPosterior(mdp.world, prior, naiveTrajectory, numberRepeats);\nprint('Prior and posterior after conditioning 3 times on Naive path');\ndisplayResults(getPosterior(mdp.world, prior, []), posteriorNaive);\n~~~~\n\n<br>\n\n--------\n\n### Model that includes discounting: jointly infer discounting, preferences, softmax noise\n\nOur inference model now has the optimal agent as a special case but also includes time-inconsistent agents. This model jointly infers the discounting behavior, the agent's utilities and the softmax noise.\n\nWe show two different posteriors. The first is after conditioning on the Naive path (as above). In the second, we imagine that we have observed the agent taking the same path on multiple occasions (three times) and we condition on this.\n\n<!-- infer_joint_model_naive -->\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(\n function(x) {\n return {\n sophisticatedOrNaive: x,\n probability: getPriorProb({ sophisticatedOrNaive: x }),\n distribution: 'prior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(\n function(x) {\n return {\n sophisticatedOrNaive: x,\n probability: getPosteriorProb({ sophisticatedOrNaive: x }),\n distribution: 'posterior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPosteriorDataTable,\n sophisticationPriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPriorProb({ vegMinusDonut: x }),\n distribution: 'prior'\n };\n },\n [-10, 0, 10, 20, 30, 40, 50, 60, 70]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({ vegMinusDonut: x }),\n distribution: 'posterior'\n };\n },\n [-10, 0, 10, 20, 30, 40, 50, 60, 70]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({donutTempting: x}),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x){\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n 
viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n\n var alphaPriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 10, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPosteriorProb({ alpha: x }),\n distribution: 'posterior'\n };\n },\n [0.1, 10, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n};\n\nvar naiveTrajectory = [\n [{\:[3,1],\:false,\:11},\],\n [{\:[3,2],\:false,\:10,\:[3,1]},\],\n [{\:[3,3],\:false,\:9,\:[3,2]},\],\n [{\:[3,4],\:false,\:8,\:[3,3]},\],\n [{\:[3,5],\:false,\:7,\:[3,4]},\],\n [{\:[2,5],\:false,\:6,\:[3,5],\:0},\],\n [{\:[2,5],\:true,\:6,\:[2,5],\:1},\]\n];\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\n// Prior on agent's utility function. We fix the delayed utilities\n// to make inference faster\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30];\n var donut = [uniformDraw(utilityValues), -10];\n var veg = [uniformDraw(utilityValues), 20];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: uniformDraw([0, 1]),\n sophisticatedOrNaive: uniformDraw(['naive','sophisticated'])\n };\n};\nvar priorAlpha = function(){\n return uniformDraw([.1, 10, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar world = mdp.world;\n\nvar posterior = getPosterior(world, prior, naiveTrajectory);\nprint('Prior and posterior after observing Naive path');\ndisplayResults(getPosterior(world, prior, []), posterior);\n\nprint('Prior and posterior after observing Naive path three times');\nvar numberRepeats = 2;\ndisplayResults(getPosterior(world, prior, []),\n getPosterior(world, prior, naiveTrajectory, numberRepeats));\n~~~~\n\nConditioning on the Naive path once, the probabilities of the agent being Naive and of `donutTempting` both go up. However, the probability of high softmax noise also goes up. In terms of preferences, we rule out a strong preference for Veg and slightly reduce a preference for Donut. So if the agent were Naive, tempted by Donut and with very low noise, our inference would not place most of the posterior on this explanation. There are two reasons for this. First, this agent is unlikely in the prior. Second, the explanation of the behavior in terms of noise is plausible. (In our Gridworld setup, we don't allow the agent to backtrack to the previous state. This means there are few cases where a softmax noisy agent would behavior differently than a low noise one.). Conditioning on the same Naive path three times makes the explanation in terms of noise much less plausible: the agent would makes the same \ three times and makes no other mistakes. 
(The results for the Sophisticated path are similar.)\n\nIn summary, if we observe the agent repeatedly take the Naive path, the \ explains this in terms of a preference for Donut and significant softmax noise (explaining why the agent takes Donut North over Donut South). The \ is similar to the Optimal Model when it observes the Naive path *once*. However, observing it multiple times, it infers that the agent has low noise and an overall preference for Veg.\n\n<br>\n\n------\n\n### Preferences for the two Donut Store branches can vary\n\nAnother explanation of the Naive path is that the agent has a preference for the \ branch of the Donut Store over the \ branch. Maybe this branch is better run or has more space. If we add this to our set of possible preferences, inference changes significantly.\n\nTo speed up inference, we use a fixed assumption that the agent is Naive. There are three explanations of the agent's path:\n\n1. Softmax noise: measured by $$\\alpha$$\n2. The agent is Naive and tempted by Donut: measured by `discount` and `donutTempting`\n3. The agent prefers Donut N to Donut S: measured by `donutNGreaterDonutS` (i.e. Donut N's utility is greater than Donut S's).\n\nThese three can also be combined to explain the behavior.\n\n<!-- TODO fix infer_joint_two_donut_naive -->\n\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. 
Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var alphaPriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPosteriorProb({alpha: x}),\n distribution: 'posterior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({ donutTempting: x }),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n\n var discountPriorDataTable = map(\n function(x) {\n return {\n discount: x,\n probability: getPriorProb({ discount: x }),\n distribution: 'prior'\n };\n },\n [0, 1]);\n\n var discountPosteriorDataTable = map(\n function(x) {\n return {\n discount: x,\n probability: getPosteriorProb({ discount: x }),\n distribution: 'posterior'\n };\n },\n [0, 1]);\n\n var discountDataTable = append(discountPriorDataTable,\n discountPosteriorDataTable);\n\n viz.bar(discountDataTable, { groupBy: 'distribution' });\n\n var donutNvsSPriorDataTable = map(\n function(x) {\n return {\n donutNGreaterDonutS: x,\n probability: getPriorProb({ donutNGreaterDonutS: x }),\n distribution: 'prior'\n };\n },\n [false, true]);\n\n var donutNvsSPosteriorDataTable = map(\n function(x) {\n return {\n donutNGreaterDonutS: x,\n probability: getPosteriorProb({ donutNGreaterDonutS: x }),\n distribution: 'posterior'\n };\n },\n [false, true]);\n\n var donutNvsSDataTable = append(donutNvsSPriorDataTable,\n donutNvsSPosteriorDataTable);\n\n viz.bar(donutNvsSDataTable, { groupBy: 'distribution' });\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20];\n return {\n 'Donut N': [uniformDraw(utilityValues), -10],\n 'Donut S': [uniformDraw(utilityValues), -10],\n 'Veg': [20, uniformDraw(utilityValues)],\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: uniformDraw([0, 1]),\n sophisticatedOrNaive: 'naive'\n };\n};\nvar priorAlpha = function(){\n return 
uniformDraw([.1, 100, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar posterior = getPosterior(mdp.world, prior, naiveTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nThe explanation in terms of Donut North being preferred does well in the posterior. This is because the discounting explanation (even assuming the agent is Naive) is unlikely a priori (due to our simple uniform priors on utilities and discounting). While high noise is more plausible a priori, the noise explanation still needs to posit a low probability series of events.\n\nWe see a similar result if we enrich the set of possible utilities for the Sophisticated path. This time, we allow the `timeCost`, i.e. the cost for taking a single timestep, to be positive. This means the agent prefers to spend as much time as possible moving around before reaching a restaurant. Here are the results:\n\nObserve the sophisticated path with possibly positive timeCost:\n\n<!-- infer_joint_timecost_sophisticated -->\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var alphaPriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPosteriorProb({alpha: x}),\n distribution: 'posterior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({ donutTempting: x }),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n\n var discountPriorDataTable = map(\n function(x){\n return {\n discount: x,\n probability: getPriorProb({ discount: x }),\n distribution: 'prior'\n };\n },\n [0, 1]);\n\n var discountPosteriorDataTable = map(\n function(x){\n return {\n discount: x,\n probability: getPosteriorProb({ discount: x }),\n distribution: 'posterior'\n };\n },\n [0, 1]);\n\n var discountDataTable = append(discountPriorDataTable,\n discountPosteriorDataTable);\n\n viz.bar(discountDataTable, { 
groupBy: 'distribution' });\n\n var timeCostPriorDataTable = map(\n function(x) {\n return {\n timeCost: x,\n probability: getPriorProb({ timeCost: x }),\n distribution: 'prior'\n };\n },\n [-0.01, 0.1, 1]);\n\n var timeCostPosteriorDataTable = map(\n function(x) {\n return {\n timeCost: x,\n probability: getPosteriorProb({ timeCost: x }),\n distribution: 'posterior'\n };\n },\n [-0.01, 0.1, 1]);\n\n var timeCostDataTable = append(timeCostPriorDataTable,\n timeCostPosteriorDataTable);\n\n viz.bar(timeCostDataTable, { groupBy: 'distribution' });\n};\n\nvar sophisticatedTrajectory = [\n [{\:[3,1],\:false,\:11},\],\n [{\:[3,2],\:false,\:10,\:[3,1]},\],\n [{\:[3,3],\:false,\:9,\:[3,2]},\],\n [{\:[4,3],\:false,\:8,\:[3,3]},\],\n [{\:[5,3],\:false,\:7,\:[4,3]},\],\n [{\:[5,4],\:false,\:6,\:[5,3]},\],\n [{\:[5,5],\:false,\:5,\:[5,4]},\],\n [{\:[5,6],\:false,\:4,\:[5,5]},\],\n [{\:[4,6],\:false,\:3,\:[5,6]},\],\n [{\:[4,7],\:false,\:2,\:[4,6],\:0},\],\n [{\:[4,7],\:true,\:2,\:[4,7],\:1},\]\n];\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30];\n var donut = [uniformDraw(utilityValues), -10]\n var veg = [uniformDraw(utilityValues), 20];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': uniformDraw([-0.01, 0.1, 1])\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: uniformDraw([0, 1]),\n sophisticatedOrNaive: 'sophisticated'\n };\n};\nvar priorAlpha = function(){\n return uniformDraw([0.1, 100, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar posterior = getPosterior(mdp.world, prior, sophisticatedTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nNext chapter: [Multi-agent models](/chapters/7-multi-agent.html)\n", "date_published": "2017-03-19T18:54:16Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5e-joint-inference.md"} |
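A recurring piece of machinery above is the optional `numberRepeats` argument to `getPosterior`, which conditions on the same observed path several times over. The toy model below (a coin-weight example of our own, not from the agentmodels library) illustrates why repetition matters: a single observation is easy to explain as noise, but the same observation repeated pushes the posterior toward parameters that make it likely.

~~~~
// Toy illustration of repeated conditioning. The coin example is ours, purely
// to show the effect of conditioning on the same observation several times.
var posteriorAfterRepeats = function(numberRepeats) {
  return Infer({ model() {
    var p = uniformDraw([0.1, 0.5, 0.9]);
    var coin = Bernoulli({ p });
    // condition on the same observation (numberRepeats + 1) times
    map(function(i) { observe(coin, true); }, _.range(numberRepeats + 1));
    return p;
  }});
};

viz(posteriorAfterRepeats(0));  // observe once
viz(posteriorAfterRepeats(2));  // observe three times: most mass on p = 0.9
~~~~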
| {"id": "431d0938a4c47c6c60344dfaf570595a", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/meetup-2017.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Modeling Agents & Reinforcement Learning with Probabilistic Programming\nhidden: true\n---\n\n## Intro\n\n### Motivation\n\nWhy probabilistic programming?\n- **ML:** predictions based on prior assumptions and data\n- **Deep Learning:** lots of data + very weak assumptions\n- **Rule-based systems:** strong assumptions + little data\n- **Probabilistic programming:** a flexible middle ground\n\nWhy model agents?\n- Build **artificial agents** to automate decision-making\n - Example: stock trading\n- **Model humans** to build helpful ML systems\n - Examples: recommendation systems, dialog systems\n\n### Preview\n\nWhat to get out of this talk:\n- Intuition for programming in a PPL\n- Core PPL concepts\n- Why are PPLs uniquely suited for modeling agents?\n- Idioms for writing agents as PPs\n- How do RL and PP relate?\n\nWhat not to expect:\n- Lots of applications\n- Production-ready systems\n\n## Probabilistic programming basics\n\n### Our language: WebPPL\n\nTry it at [webppl.org](http://webppl.org)\n\n### A functional subset of JavaScript\n\nWhy JS?\n- Fast\n- Rich ecosystem\n- Actually a nice language underneath all the cruft\n- Runs locally via node.js, but also in browser:\n - [SmartPages](https://stuhlmueller.org/smartpages/)\n - [Image inference viz](http://dippl.org/examples/vision.html)\n - [Spaceships](http://dritchie.github.io/web-procmod/)\n - [Agent viz](http://agentmodels.org/chapters/3b-mdp-gridworld.html#hiking-in-gridworld)\n\n~~~~\nvar xs = [1, 2, 3, 4];\n\nvar square = function(x) {\n return x * x;\n};\n\nmap(square, xs);\n~~~~\n\n### Distributions and sampling\n\nDocs: [distributions](http://docs.webppl.org/en/dev/distributions.html)\n\n#### Discrete distributions\n\nExamples: `Bernoulli`, `Categorical`\n\nSampling helpers: `flip`, `categorical`\n\n~~~~\nvar dist = Bernoulli({ p: 0.3 });\n\nvar flip = function(p) {\n return sample(Bernoulli({ p }));\n}\n\nflip(.3)\n~~~~\n\n#### Continuous distributions\n\nExamples: `Gaussian`, `Beta`\n\n~~~~\nvar dist = Gaussian({ \n mu: 1,\n sigma: 0.5\n});\n\nviz(repeat(1000, function() { return sample(dist); }));\n~~~~\n\n#### Building complex distributions out of simple parts\n\nExample: geometric distribution\n\n~~~~\nvar geometric = function(p) {\n if (flip(p)) {\n return 0;\n } else {\n return 1 + geometric(p);\n }\n};\n\nviz(repeat(100, function() { return geometric(.5); }));\n~~~~\n\n### Inference\n\n#### Reifying distributions\n\n`Infer` reifies the geometric distribution so that we can compute probabilities:\n\n~~~~\nvar geometric = function(p) {\n if (flip(p)) {\n return 0;\n } else {\n return 1 + geometric(p);\n }\n};\n\nvar model = function() {\n return geometric(.5);\n};\n\nvar dist = Infer({\n model,\n maxExecutions: 100\n});\n\nviz(dist);\n\nMath.exp(dist.score(3))\n~~~~\n\n#### Computing conditional distributions\n\nExample: inferring the weight of a geometric distribution\n\n~~~~\nvar geometric = function(p) {\n if (flip(p)) {\n return 0;\n } else {\n return 1 + geometric(p);\n }\n}\n\nvar model = function() {\n var u = uniform(0, 1);\n var x = geometric(u);\n condition(x < 4);\n return u;\n}\n\nvar dist = Infer({\n model,\n method: 'rejection',\n samples: 1000\n})\n\ndist\n~~~~\n\n#### Technical note: three ways to condition\n\n~~~~\nvar model = function() {\n var p = 
flip(.5) ? 0.5 : 1;\n var coin = Bernoulli({ p });\n\n var x = sample(coin);\n condition(x === true);\n \n// observe(coin, true);\n \n// factor(coin.score(true));\n \n return { p };\n}\n\nviz.table(Infer({ model }));\n~~~~\n\n#### A slightly less toy example: regression\n\nDocs: [inference algorithms](http://docs.webppl.org/en/master/inference/methods.html)\n\n~~~~\nvar xs = [1, 2, 3, 4, 5];\nvar ys = [2, 4, 6, 8, 10];\n\nvar model = function() {\n var slope = gaussian(0, 10);\n var offset = gaussian(0, 10);\n var f = function(x) {\n var y = slope * x + offset;\n return Gaussian({ mu: y, sigma: .1 })\n };\n map2(function(x, y){\n observe(f(x), y)\n }, xs, ys)\n return { slope, offset };\n}\n\nInfer({\n model,\n method: 'MCMC',\n kernel: {HMC: {steps: 10, stepSize: .01}},\n samples: 2000,\n})\n~~~~\n\n## Agents as probabilistic programs\n\n### Deterministic choices\n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar outcome = function(action) {\n if (action === 'italian') {\n return 'pizza';\n } else {\n return 'steak frites';\n }\n};\n\nvar actionDist = Infer({ \n model() {\n var action = uniformDraw(actions);\n condition(outcome(action) === 'pizza');\n return action;\n }\n});\n\nactionDist\n~~~~\n\n### Expected utility\n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n var nextStates = ['bad', 'good', 'spectacular'];\n var nextProbs = ((action === 'italian') ? \n [0.2, 0.6, 0.2] : \n [0.05, 0.9, 0.05]);\n return categorical(nextProbs, nextStates);\n};\n\nvar utility = function(state) {\n var table = { \n bad: -10, \n good: 6, \n spectacular: 8 \n };\n return table[state];\n};\n\nvar expectedUtility = function(action) {\n var utilityDist = Infer({\n model: function() {\n var nextState = transition('initialState', action);\n var u = utility(nextState);\n return u;\n }\n });\n return expectation(utilityDist);\n};\n\nmap(expectedUtility, actions);\n~~~~\n\n### Softmax-optimal decision-making\n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n var nextStates = ['bad', 'good', 'spectacular'];\n var nextProbs = ((action === 'italian') ? 
\n [0.2, 0.6, 0.2] : \n [0.05, 0.9, 0.05]);\n return categorical(nextProbs, nextStates);\n};\n\nvar utility = function(state) {\n var table = { \n bad: -10, \n good: 6, \n spectacular: 8 \n };\n return table[state];\n};\n\nvar alpha = 1;\n\nvar agent = function(state) {\n return Infer({ \n model() {\n\n var action = uniformDraw(actions);\n \n var expectedUtility = function(action) {\n var utilityDist = Infer({\n model: function() {\n var nextState = transition('initialState', action);\n var u = utility(nextState);\n return u;\n }\n });\n return expectation(utilityDist);\n };\n \n var eu = expectedUtility(action);\n \n factor(eu);\n \n return action;\n \n }\n });\n};\n\nagent('initialState');\n~~~~\n\n## Sequential decision problems\n\n- [Restaurant Gridworld](http://agentmodels.org/chapters/3a-mdp.html) (1, last)\n- Structure of expected utility recursion\n- Dynamic programming\n\n\n~~~~\nvar act = function(state) {\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action);\n factor(eu);\n return action;\n }});\n};\n\nvar expectedUtility = function(state, action){\n var u = utility(state, action);\n if (isTerminal(state)){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action);\n var nextAction = sample(act(nextState));\n return expectedUtility(nextState, nextAction);\n }}));\n }\n};\n~~~~\n\n- [Hiking Gridworld](http://agentmodels.org/chapters/3b-mdp-gridworld.html) (1, 2, 3, last)\n- Expected state-action utilities (Q values)\n- [Temporal inconsistency](http://agentmodels.org/chapters/5b-time-inconsistency.html) in Restaurant Gridworld\n \n\n## Reasoning about agents\n\n- [Learning about preferences from observations](http://agentmodels.org/chapters/4-reasoning-about-agents.html) (1 & 2)\n\n## Multi-agent models\n\n### A simple example: Coordination games\n\n~~~~\nvar locationPrior = function() {\n if (flip(.55)) {\n return 'popular-bar';\n } else {\n return 'unpopular-bar';\n }\n}\n\nvar alice = dp.cache(function(depth) {\n return Infer({ model() {\n var myLocation = locationPrior();\n var bobLocation = sample(bob(depth - 1));\n condition(myLocation === bobLocation);\n return myLocation;\n }});\n});\n\nvar bob = dp.cache(function(depth) {\n return Infer({ model() {\n var myLocation = locationPrior();\n if (depth === 0) {\n return myLocation;\n } else {\n var aliceLocation = sample(alice(depth));\n condition(myLocation === aliceLocation);\n return myLocation;\n }\n }});\n});\n\nalice(5)\n~~~~\n\n### Other examples\n\n- [Game playing: tic-tac-toe](http://agentmodels.org/chapters/7-multi-agent.html)\n- [Language understanding](http://agentmodels.org/chapters/7-multi-agent.html)\n\n## Reinforcement learning\n\n### Algorithms vs Models\n\n- Models: encode world knowledge\n - PPLs suited for expressing models\n- Algorithms: encode mechanisms (for inference, optimization)\n - RL is mostly about algorithms\n- But some algorithms can be expressed using PPL components\n\n### Inference vs. 
Optimization\n\n~~~~\nvar k = 3; // number of heads\nvar n = 10; // number of coin flips\n\nvar model = function() {\n var p = sample(Uniform({ a: 0, b: 1}));\n var dist = Binomial({ p, n });\n observe(dist, k);\n return p;\n};\n\nvar dist = Infer({ \n model,\n method: 'MCMC',\n samples: 100000,\n burn: 1000\n});\n\nexpectation(dist);\n~~~~\n\n~~~~\nvar k = 3; // number of heads\nvar n = 10; // number of coin flips\n\nvar model = function() {\n var p = Math.sigmoid(modelParam({ name: 'p' }));\n var dist = Binomial({ p, n });\n observe(dist, k);\n return p;\n};\n\nOptimize({\n model,\n steps: 1000,\n optMethod: { sgd: { stepSize: 0.01 }}\n});\n\nMath.sigmoid(getParams().p);\n~~~~\n\n\n\n### Policy Gradient\n\n~~~~\n///fold:\nvar numArms = 10;\n\nvar meanRewards = map(\n function(i) {\n if ((i === 7) || (i === 3)) {\n return 5;\n } else {\n return 0;\n }\n },\n _.range(numArms));\n\nvar blackBox = function(action) {\n var mu = meanRewards[action];\n var u = Gaussian({ mu, sigma: 0.01 }).sample();\n return u;\n};\n///\n\n// actions: [0, 1, 2, ..., 9]\n\n// blackBox: action -> utility\n\nvar agent = function() {\n var ps = softmax(modelParam({ dims: [numArms, 1], name: 'ps' }));\n var action = sample(Discrete({ ps }));\n var utility = blackBox(action);\n factor(utility);\n return action;\n};\n\n\nOptimize({ model: agent, steps: 10000 });\n\nvar params = getParams();\nviz.bar(\n _.range(10),\n _.flatten(softmax(params.ps[0]).toArray()));\n~~~~\n\n## Conclusion\n\nWhat to get out of this talk, revisited:\n\n- **Intuition for programming in a PPL**\n- **Core PPL concepts**\n - Distributions & samplers\n - Inference turns samplers into distributions\n - `sample` turns distributions into samples\n - Optimization fits free parameters\n- **Idioms for writing agents as probabilistic programs**\n - Planning as inference\n - Sequential planning via recursion into the future\n - Multi-agent planning via recursion into other agents' minds\n- **Why are PPLs uniquely suited for modeling agents?**\n - Agents are structured programs\n - Planning via nested conditional distributions\n- **How do RL and PP relate?**\n - Algorithms vs models\n - Policy gradient as a PP\n\nWhere to go from here:\n- [WebPPL](http://webppl.org) (webppl.org)\n- [AgentModels](http://agentmodels.org) (agentmodels.org)\n- andreas@ought.com\n", "date_published": "2017-03-30T16:34:45Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "meetup-2017.md"} |
| {"id": "5e84ffd2457a2329bd197d1f4b94e5ab", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/6-efficient-inference.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Efficient inference\ndescription: Difficulty of inference, in particular for POMDPs and inverse planning. Outline of inference strategies.\nstatus: stub\nis_section: true\nhidden: true\n---\n", "date_published": "2016-03-09T21:34:05Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "6-efficient-inference.md"} |