Table of Contents
- Introduction
- Multi-Armed Bandits Explained
- Algorithms and Techniques
- Application to Recommendation Models
- Future Directions
- Conclusions
Introduction
When you have an app or website, one of your goals is to get people to engage with what you are creating, whether it be a product, a service, or something in between.
Recommendation systems play a crucial role in helping businesses attract, engage, and retain customers. However, the effectiveness of these systems largely depends on the accurate assignment of customers to the most suitable recommendation models. This is where multi-armed bandit algorithms come into play. In this blog post, we will explore how aligning multi-armed bandits for dynamic customer allocation can significantly optimize the performance of recommendation models.
Multi-Armed Bandits Explained
For historical context, the term "bandit" comes from the slot machines found in casinos; they were coined bandits since they ended up "stealing your money" just like a bandit would. "Multi-armed" comes from the fact that people used to pull the levers (arms) of many slot machines at once.
Definitions
Lots of deep theory exists in this space, but from a practical perspective a basic grasp of some definitions is more than enough to begin working with these algorithms.
- Bandit specific
- The Multi-Armed Bandit Problem is essentially the problem of not knowing which decision is optimal when faced with multiple options ("arms").
- The process by which you can attempt to solve these problems is to understand the specific context and then apply Multi-Armed Bandit Algorithms, which are purposefully designed to tackle them.
- Reward just refers to the payoff or return received after selecting a particular action or arm of the multi-armed bandit.
- Each arm of the bandit algorithm is associated with a reward distribution, which tells you the likelihood of receiving a reward on any given round. When a given arm is pulled, the reward received can be thought of as a realisation of this distribution.
- This distribution typically is unknown to the entity pulling the arms.
- The goal is to maximize the reward over a series of trials (arm pulls) while minimizing the regret, which is the difference between the reward received from an arm pull and the maximum reward you would have received if you had always chosen the best arm.
- Just like in reinforcement learning (RL), the reward hypothesis can be used to assert that all goals can be thought of as the maximization of cumulative reward.
- The rewards themselves are stochastic in nature, meaning that they are randomly determined by the underlying probability distribution.
- This randomness introduces uncertainty and complexity into the decision-making process, as you must balance the trade-off between exploring new arms to gather more information about their reward distributions (exploration) and exploiting known arms that have shown high rewards in the past (exploitation). This is known as the exploration-exploitation trade-off.
- Some problem formulations might have rewards that are context dependent, where the reward distributions change based on external or contextual information available at the time of making a decision. This adds an additional strategic layer to the problem, as the optimal action may vary with changes in the context.
- Business specific
- An episode refers to the duration of time for which a given customer stays assigned to the same bandit, after which they are reallocated based on some process.
- When applying customer allocations, a standard approach is staggering.
- The best way to think about it is as follows (a small sketch of this grouping appears after the list):
- Take the full customer base.
- Divide them into groups depending on the length of the episodes (e.g. if the episode length is 28 days, then each group is 1/28th of the base).
- Each day a single group gets allocated to an experiment.
- Therefore, each day there is a group expiring from their previous episode and getting reallocated to another experiment.
- After a full episode length (e.g. 28 days), all customers (across all groups) will have been through an entire episode and will therefore begin being reassigned (rolled over) to another episode.
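To make the staggering idea concrete, here is a small illustrative sketch. The function name and the use of pandas are my own assumptions for illustration, not something prescribed by the post.

```python
import pandas as pd

def assign_stagger_groups(customer_ids: list[int], episode_length: int) -> pd.DataFrame:
    """Split the customer base into `episode_length` groups so that one
    group expires and gets reallocated each day."""
    df = pd.DataFrame({"customer_id": customer_ids})
    # Group i rolls over on day i of the episode cycle.
    df["stagger_group"] = df.index % episode_length
    return df

# 280 customers and 28-day episodes -> 28 groups of 10, one rolling over per day.
groups = assign_stagger_groups(list(range(1, 281)), episode_length=28)
print(groups["stagger_group"].value_counts().sort_index())
```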
Resources
Theory is well established in this space, and a few universally recognised reference materials exist, along with a host of additional articles.
- Books
- Introduction to Multi-Armed Bandits; Aleksandrs Slivkins
- Universally recognised textbook; not overly long but touches on most topics well.
- Bandit Algorithms; Tor Lattimore & Csaba Szepesvari
- Universally recognised reference book; a lot longer, but provides a good technical grounding on the topic.
- Lecture Material
- Online Learning and Multi-Armed Bandits Oxford
- Provides some links to prerequisite materials which is nice.
- Online Probability Summary Book
- Useful to brush up on Beta & Gamma distributions in particular.
- Articles
- Introduction to Thompson Sampling: the Bernoulli bandit
- Good intro with particularly nice visualisations to help internalise the bandit learning process.
Algorithms and Techniques
Plenty of algorithms and variants of them exist; we'll touch on three of the more common ones.
ε-Greedy
Pros:
- Simplicity: Easier to implement and understand in terms of how it handles the trade-off between exploration and exploitation.
Cons:
- Inefficient Exploration: Randomly explores at a fixed rate (set by \(\epsilon\)). Intuitively, you should explore more initially to gather data and, once you have gathered more data, shift towards exploiting. Depending on your \(\epsilon\), however, you might not explore enough initially and not exploit enough later on.
- Limited Flexibility: You'd want to choose \(\epsilon\) based on the context of the problem, but tuning it to find the right value might not be easy.
- Sub-optimal Decisioning: Due to the constant exploration rate \(\epsilon\), converging to the optimal bandit might take longer.
ε-Greedy Algorithm
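The original post collapses the implementation here; below is a minimal sketch of a standard ε-greedy bandit for 0/1 rewards. The class name and interface are my own for illustration, not taken from the post or any particular library.

```python
import numpy as np

class EpsilonGreedyBandit:
    """Minimal ε-greedy bandit for 0/1 rewards (illustrative sketch)."""

    def __init__(self, n_arms: int, epsilon: float = 0.1, seed: int = 42):
        self.epsilon = epsilon
        self.rng = np.random.default_rng(seed)
        self.counts = np.zeros(n_arms)   # number of pulls per arm
        self.values = np.zeros(n_arms)   # running mean reward per arm

    def select_arm(self) -> int:
        # Explore uniformly with probability epsilon, otherwise exploit the best mean.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.counts)))
        return int(np.argmax(self.values))

    def update(self, arm: int, reward: float) -> None:
        # Incremental update of the arm's mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```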
Upper Confidence Bound (UCB)
Pros:
- Incorporating uncertainty: Instead of exploring bandits at random at a constant rate, exploration is embedded into the quantity the algorithm maximises by weighting it with how frequently the particular bandit has been assigned up to that point. The extra square-root term upweights arms which haven't been seen as much, making it a better explorer than ε-greedy.
- Higher average reward: UCB doesn't explore at random and often achieves a higher average reward compared to ε-greedy: the additional term means more exploration early on (less reward) but much less in later rounds (more reward), so the average tends to be better down the stretch. This also fits more closely with our intuition.
Cons:
- Complexity: It is more complex to implement and understand compared to ε-greedy.
- Computational Overhead: Since the algorithm requires calculating an additional confidence bound, it can be a little more demanding than ε-greedy.
- Initial Exploration: Due to the weighted exploration, it may initially explore sub-optimal bandits more frequently, meaning higher initial regret.
UCB Algorithm
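The post collapses this implementation as well; here is a minimal sketch of the standard UCB1 variant, with the same illustrative interface as the ε-greedy sketch above (the class name and details are my assumptions).

```python
import numpy as np

class UCB1Bandit:
    """Minimal UCB1 bandit (illustrative sketch)."""

    def __init__(self, n_arms: int):
        self.counts = np.zeros(n_arms)   # number of pulls per arm
        self.values = np.zeros(n_arms)   # running mean reward per arm
        self.total_pulls = 0

    def select_arm(self) -> int:
        # Pull each arm once before applying the confidence bound.
        untried = np.where(self.counts == 0)[0]
        if len(untried) > 0:
            return int(untried[0])
        # Mean reward plus an exploration bonus that shrinks as an arm is pulled more.
        bonus = np.sqrt(2 * np.log(self.total_pulls) / self.counts)
        return int(np.argmax(self.values + bonus))

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.total_pulls += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```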
Thompson Sampling
Pros:
- Optimal Uncertainty Handling: Uses a Bayesian approach to model uncertainty, which is more sophisticated and has been shown empirically to perform better than the other methods.
- More Adaptive: Exploration is driven by sampling from posterior distributions, which represents uncertainty better than either exploring completely at random (the greedy case) or altering the update measure (the UCB case). This leads to more efficient exploration and exploitation.
- Better Performance: It has been shown to outperform the other algorithms in terms of cumulative reward across rounds.
Cons:
- Complexity: It is more complex to implement and understand compared to the other techniques.
- Computational Cost: Since the algorithm requires sampling from and keeping track of a full posterior distribution per arm, it can be costly, especially if you have lots of bandits and rounds.
- Parameter Sensitive: It can be sensitive to your choice of prior distributions, which can influence its early behaviour.
Thompson Sampling Algorithm
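The post uses a Thompson Sampling object (the `ts` variable later on), but its exact implementation is hidden behind a toggle. Below is a minimal Beta-Bernoulli sketch that the later helper sketches in this post will assume; the class name, attributes and interface are my own assumptions rather than the author's code.

```python
import numpy as np

class ThompsonSamplingBandit:
    """Minimal Beta-Bernoulli Thompson Sampling for 0/1 rewards (illustrative sketch)."""

    def __init__(self, bandit_names: list[str], seed: int = 42):
        self.bandit_names = list(bandit_names)
        self.rng = np.random.default_rng(seed)
        # Beta(1, 1) priors, i.e. uniform over each arm's success probability.
        self.alpha = {name: 1.0 for name in self.bandit_names}
        self.beta = {name: 1.0 for name in self.bandit_names}

    def select_arm(self) -> str:
        # Sample a plausible success rate from each arm's posterior and pick the largest.
        samples = {n: self.rng.beta(self.alpha[n], self.beta[n]) for n in self.bandit_names}
        return max(samples, key=samples.get)

    def update(self, name: str, reward: int) -> None:
        # Conjugate Beta update: a success bumps alpha, a failure bumps beta.
        self.alpha[name] += reward
        self.beta[name] += 1 - reward
```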
Application to Recommendation Models
In the context of recommendation systems, multi-armed bandits can be applied to dynamically allocate customers to different recommendation models. Each recommendation model represents an “arm,” and the goal is to assign customers to the model that is most likely to provide them with relevant and satisfying recommendations. By using multi-armed bandit algorithms, the system can continuously learn and adapt its customer assignment strategy based on the observed performance of each recommendation model.
The better an arm's performance, the more assignments it will get, and vice versa.
Below I will share some of the core steps, along with considerations you can take, in order to apply bandit algorithms to your data. Specifically, I'll be showcasing the application of the Thompson Sampling algorithm.
Defining our data
You need data for training your bandit algorithm as well as for choosing the customers you want to perform assignments for. For demo purposes I'll simulate some random data, but in practice you'd want to create your dataset from your own data sources. This would most likely require using some SQL dialect to transform your source data into something usable.
The key fields required:
- customer_id : int A unique identifier for a customer; in practice you can use multiple fields to define a unique customer if a single id doesn't exist.
- bandit_name : str The bandit that the given customer has been assigned to.
- reward : int Some reward corresponding to the performance of the customer. We will consider a discrete reward here, which could correspond to whether the customer clicked or engaged while being recommended content from the given bandit.
The training data would typically cover a period some time back, while the assignment (scoring) data would be based on the most recent events.
training_data = generate_fictitious_data(num_rows=10_000, customer_id_start=1)
assignment_data = generate_fictitious_data(num_rows=5000, customer_id_start=10_001)
generate_fictitious_data function
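The helper is hidden behind a toggle in the original post; a plausible minimal version producing the three fields described above might look like the following (the column values and the roughly 30% reward rate are arbitrary assumptions):

```python
import numpy as np
import pandas as pd

def generate_fictitious_data(num_rows: int, customer_id_start: int, seed: int = 42) -> pd.DataFrame:
    """Simulate one row per customer: an id, a randomly assigned bandit and a 0/1 reward."""
    rng = np.random.default_rng(seed)
    return pd.DataFrame({
        "customer_id": np.arange(customer_id_start, customer_id_start + num_rows),
        "bandit_name": rng.choice(["A", "B", "C"], size=num_rows),
        "reward": rng.binomial(n=1, p=0.3, size=num_rows),  # arbitrary ~30% engagement rate
    })
```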
Training our bandit
- High level steps:
- Setup a dataset that can store our bandit parameters.
- Take these parameters and initialise our Multi-Armed (Thompson Sampling) bandit algo with them.
- Feed in our historic training dataset and use it to update our Multi-Armed bandit model.
- Update (overwrite) our stored bandit parameters with the final parameters after training.
- We can also add in some visualisations too.
- Visualise the posterior distributions for each bandit after training to gauge the model's understanding.
Initialise key constants
episode_length = 28
bandit_names = ['A', 'B', 'C']
Create a bandit parameter dataset
You'll want to have someplace to store your bandit parameters, since these will change over time and you'll need to use the latest parameters during assignment. Here I have decided to create a dataframe to hold them, but for a real production system you could use some dedicated data store.
bandit_parameters = create_bandit_parameters_dataframe(bandit_names)
bandit_parameters.head()
create_bandit_parameters_dataframe function
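This helper is also collapsed in the post; assuming Beta-Bernoulli Thompson Sampling, a minimal version could simply store uniform Beta(1, 1) prior parameters per bandit (the column names here are my assumptions):

```python
import pandas as pd

def create_bandit_parameters_dataframe(bandit_names: list[str]) -> pd.DataFrame:
    """One row per bandit holding its Beta posterior parameters, starting from Beta(1, 1)."""
    return pd.DataFrame({
        "bandit_name": list(bandit_names),
        "alpha": [1.0] * len(bandit_names),
        "beta": [1.0] * len(bandit_names),
    })
```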
Train your bandit
We can now train our bandit by applying it to our historical training data. This allows the algorithm to learn upfront which bandit arm tends to give the most reward, so these learnings can be put to good use when assigning customers.
ts, bandit_parameters = train_bandits(bandit_parameters,
training_data,
bandit_names)
bandit_parameters.head()
train_bandits functionality
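A minimal sketch of what `train_bandits` could look like, assuming the `ThompsonSamplingBandit` and parameter dataframe sketches shown earlier (the real implementation in the post may differ):

```python
def train_bandits(bandit_parameters, training_data, bandit_names):
    """Replay the historic rows through the bandit, then persist the learned parameters."""
    # Initialise the bandit from the stored parameters.
    ts = ThompsonSamplingBandit(bandit_names)
    for _, row in bandit_parameters.iterrows():
        ts.alpha[row["bandit_name"]] = row["alpha"]
        ts.beta[row["bandit_name"]] = row["beta"]

    # Each historic (bandit, reward) pair updates the corresponding posterior.
    for _, row in training_data.iterrows():
        ts.update(row["bandit_name"], int(row["reward"]))

    # Overwrite the stored parameters with the trained values.
    bandit_parameters = bandit_parameters.assign(
        alpha=[ts.alpha[n] for n in bandit_parameters["bandit_name"]],
        beta=[ts.beta[n] for n in bandit_parameters["bandit_name"]],
    )
    return ts, bandit_parameters
```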
Visualise the posterior distributions
It's useful to get a feel for exactly what the model has learned, and this can be done by visualising the posterior distributions generated for each of the bandit's arms. If the distributions are heavily peaked with differing expected values, the model will tend to exploit more, since it's more likely that a large reward will come from the distribution with the higher expected value (more divergence implies less overlap, so the lower distributions are unlikely to win a sampling round).
plot_posterior_distributions(ts)
plot_posterior_distributions function
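A minimal plotting sketch, assuming the Beta posteriors held by the `ThompsonSamplingBandit` sketch above and using matplotlib and scipy (the actual function in the post may differ):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta as beta_dist

def plot_posterior_distributions(ts) -> None:
    """Plot the Beta posterior over each arm's success probability."""
    x = np.linspace(0, 1, 500)
    for name in ts.bandit_names:
        plt.plot(x, beta_dist.pdf(x, ts.alpha[name], ts.beta[name]), label=name)
    plt.xlabel("Success probability")
    plt.ylabel("Density")
    plt.title("Posterior distribution per bandit")
    plt.legend()
    plt.show()
```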
As you can see, the distributions are fairly close together and not divergent. This makes intuitive sense, as it aligns with what we'd expect given the process used to construct our training dataset.
Performing customer assignment
- High level steps:
- Grab the customers that you want to allocate.
- Take the initially trained bandit algorithm and apply it to each customer making sure to update its parameters after each assignment.
- Make the assignments for all customers and store them, alongside other metadata such as the start and end dates of the assignment, inside some data structure.
- Update our stored bandit parameters with the final values.
- We can also add some visualisations in too:
- Visualise the customer group's most recent reward distribution (based on the episode they are rolling out of).
- Visualise the final bandit allocation distributions.
Visualising the real reward distribution
plot_reward_distribution(assignment_data,
group_col="bandit_name",
reward_col="reward")
plot_reward_distribution function
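A minimal sketch of such a plot, assuming a pandas dataframe with the columns described earlier (the post's own implementation may differ):

```python
import matplotlib.pyplot as plt

def plot_reward_distribution(df, group_col: str, reward_col: str) -> None:
    """Bar chart of the mean observed reward per bandit group."""
    mean_rewards = df.groupby(group_col)[reward_col].mean().sort_index()
    mean_rewards.plot(kind="bar")
    plt.ylabel(f"Mean {reward_col}")
    plt.title("Observed reward rate per group")
    plt.show()
```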
Performing Assignments
customer_assignments, bandit_parameters = assign_customers(bandit_parameters,
assignment_data,
bandit_names,
episode_length,
ts)
customer_assignments.head()
assign_customers function
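A minimal sketch of `assign_customers`, assuming the `ThompsonSamplingBandit` sketch from earlier; the ordering of the update and selection steps, the staggered date logic and the output columns are my own interpretation of the high-level steps above:

```python
import pandas as pd

def assign_customers(bandit_parameters, assignment_data, bandit_names, episode_length, ts):
    """Assign each customer to an arm, updating the bandit as rewards are observed."""
    # bandit_names is kept only to mirror the call signature used in the post.
    assignments = []
    today = pd.Timestamp.today().normalize()
    for i, row in enumerate(assignment_data.itertuples(index=False)):
        # Learn from the reward observed in the episode the customer is rolling out of.
        ts.update(row.bandit_name, int(row.reward))
        chosen = ts.select_arm()
        # Stagger start dates so that one group rolls over per day.
        start_date = today + pd.Timedelta(days=i % episode_length)
        assignments.append({
            "customer_id": row.customer_id,
            "bandit_name": chosen,
            "start_date": start_date,
            "end_date": start_date + pd.Timedelta(days=episode_length),
        })

    # Persist the updated posterior parameters.
    bandit_parameters = bandit_parameters.assign(
        alpha=[ts.alpha[n] for n in bandit_parameters["bandit_name"]],
        beta=[ts.beta[n] for n in bandit_parameters["bandit_name"]],
    )
    return pd.DataFrame(assignments), bandit_parameters
```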
Visualising assignment distributions
plot_bandit_assignment_distributions(customer_assignments,
"bandit_name",
"end_date")
plot_bandit_assignment_distributions function
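A minimal sketch of the assignment plot, assuming the assignment dataframe produced above (the real implementation may differ):

```python
import matplotlib.pyplot as plt

def plot_bandit_assignment_distributions(assignments, bandit_col: str, date_col: str) -> None:
    """Show how many customers each bandit received, broken out by episode end date."""
    counts = (
        assignments.groupby([date_col, bandit_col])
        .size()
        .unstack(fill_value=0)
    )
    counts.plot(kind="bar", stacked=True)
    plt.ylabel("Number of customers")
    plt.title("Customer assignments per bandit")
    plt.show()
```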
Future Directions
There are many ways to extend the work here; I have listed a few directions below:
- Handle continuous rewards
- At the moment the Thompson Sampling algorithm only applies to the discrete reward case.
- To adapt for continuous rewards, things like the Bayesian prior and the update logic will need to be extended.
- Over-aggressive assignment handling
- At the moment, if you train on too much data your posterior distributions can become extremely peaked and divergent, which can lead to almost always assigning to the most performant bandit.
- This might not always be desired, especially in a practical setting, as you may want some variation to continue to monitor behaviour across the recommendation models.
- Some potential ways to adapt are:
- Sliding Window: Only training on a recent window of data would cap the total amount of reward a given bandit could receive, thereby limiting how peaked the distributions can become. The challenge might be tuning the window length for the problem.
- Contextual reward handling
- In a lot of real-world cases the reward can vary over time due to a variety of factors, including the context.
- To handle this you’d want to adapt the “Vanilla MAB” approach for more of a “Contextual MAB” approach.
- Plenty of resources touch on it; this somewhat recent lecture at the PyData conference might be a nice place to start.
Conclusions
With all this we come to the end. Hopefully it's been insightful and you've got a peek into the purpose behind Multi-Armed Bandit problems, and specifically how Thompson Sampling can be applied to the case of customer assignment and how it can be beneficial.