I have released v6.1 of the model. It's a large improvement in accuracy, but especially in ROI. So let's talk betting strategy. One question I get asked practically every day is, "Why do you recommend only the AI-picked winners even if they're not +EV?"
The best tech available for predicting UFC fights is binary classification algorithms. These are algorithms specialized in classifying a fighter as either 1 or 0, win or loss. The confidence score the algorithm comes up with is not its fundamental strength.
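If you're curious what that distinction looks like in code, here's a minimal sketch using scikit-learn. The features, data, and model choice are placeholders, not the actual v6.1 pipeline: the hard 1/0 pick comes from predict(), while the confidence score comes from predict_proba().

```python
# Minimal sketch of pick vs. confidence score with a generic binary classifier.
# The features, data, and model choice are illustrative, not the real v6.1 pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                          # stand-in fight features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # 1 = fighter A wins, 0 = fighter A loses

clf = GradientBoostingClassifier().fit(X, y)

picks = clf.predict(X[:3])                  # hard 1/0 picks -- what the training objective rewards
scores = clf.predict_proba(X[:3])[:, 1]     # "confidence scores" -- not guaranteed to be calibrated
print(picks, scores)
```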
"So why don't you calibrate it?"
Because the calibration tech we have for these algorithms sucks. Platt Scaling and Isotonic Regression are the two most potent weapons we have to calibrate these predictions. I have experimented with them a lot. 100% of the time they suck: they lower both the accuracy and the ROI. I've experimented with custom calibrations too, but it's just too inconsistent and I don't see a promising way forward. I would absolutely love to have better calibration, so if you have any ideas please let me know. That being said, the ultimate goal here is risk-adjusted returns. Calibration is a tool to help us get there, but it's not necessary for achieving the goal.
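For anyone who wants to run the same experiment, this is roughly how Platt scaling and isotonic regression get bolted onto a classifier in scikit-learn. It's a sketch of the kind of test described above, not my exact setup; X_train and y_train are placeholders for your own training data.

```python
# Wrapping a base classifier with the two standard calibration methods.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier

base = GradientBoostingClassifier()

platt = CalibratedClassifierCV(base, method="sigmoid", cv=5)      # Platt scaling
isotonic = CalibratedClassifierCV(base, method="isotonic", cv=5)  # isotonic regression

# platt.fit(X_train, y_train)
# isotonic.fit(X_train, y_train)
# Then compare accuracy and backtested ROI of the calibrated models against the
# raw model on a held-out year of fights before deciding whether to keep them.
```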
Anyway, who cares about calibration if we're making fat ROI? Take off your sports bettor hat and set aside the "fundamentals" you know, like the idea that +EV is all that matters to be successful. Put on your machine learning cap and look at the data. This is the calibration curve:

Note how the model is highly underconfident in its 50-65% confidence score picks and highly overconfident in its 35-50% confidence scores. The nature of the algorithm is to maximize its ability to successfully pick the winner, and it does this at the expense of accurate calibration. It clusters its confidence scores in the 50-65% range. AND THAT'S OK!
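For reference, a curve like the one above is typically produced by bucketing the confidence scores and comparing each bucket's average score to the actual win rate in that bucket. Here's a sketch with scikit-learn; the outcomes and scores are simulated placeholders, not the model's real predictions.

```python
# Plot a calibration (reliability) curve from held-out outcomes and confidence scores.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)
y_prob = rng.uniform(0.35, 0.65, size=400)              # scores clustered in a narrow band
y_true = (rng.uniform(size=400) < y_prob).astype(int)   # simulated fight outcomes

frac_won, mean_score = calibration_curve(y_true, y_prob, n_bins=10)

plt.plot(mean_score, frac_won, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.xlabel("mean confidence score")
plt.ylabel("actual win rate")
plt.legend()
plt.show()
```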
Here are the model's evaluations and the Vegas evaluations on the last year of unseen fights:

Note the model is more accurate than Vegas, but its log loss and Brier scores (which measure how well the confidence scores match real-world win rates) are worse. Again, THIS IS OK. Why? Because look at these profit strategy backtests over the last year of unseen fights:
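If you want to reproduce these metrics yourself, they're one-liners in scikit-learn. Accuracy only cares whether the pick was right; log loss and the Brier score also punish confidence scores that drift from real-world win rates, which is exactly where the model gives ground to Vegas. The arrays below are placeholders for held-out outcomes, model scores, and the implied probabilities from the odds.

```python
# Accuracy vs. probability-quality metrics on a held-out set (placeholder data).
import numpy as np
from sklearn.metrics import accuracy_score, brier_score_loss, log_loss

y_true = np.array([1, 0, 1, 1, 0])                       # actual outcomes
y_prob = np.array([0.58, 0.45, 0.61, 0.52, 0.48])        # model confidence scores
vegas_prob = np.array([0.65, 0.38, 0.70, 0.55, 0.40])    # implied probabilities from the odds

print(accuracy_score(y_true, y_prob > 0.5))   # how often the pick is right
print(log_loss(y_true, y_prob))               # punishes miscalibrated confidence
print(brier_score_loss(y_true, y_prob))       # mean squared error of the probabilities
print(brier_score_loss(y_true, vegas_prob))   # same metric for the Vegas line
```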

The fundamental strategy of betting on 100% of the AI picks at the closing odds returns +14% ROI (ai_all_picks_closing). The strategy of picking only the +EV fighter based on the AI confidence score, whether or not the AI picked that fighter to win, returns -0.5% ROI (any_fighter_positive_ev_closing).
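For clarity on what counts as "+EV" in these backtests: expected value per unit staked is the confidence score times the decimal odds, minus one. A quick illustration (the numbers are made up, not real picks):

```python
# EV per 1 unit staked: positive means the confidence score says the price is good.
def expected_value(p_win: float, decimal_odds: float) -> float:
    return p_win * decimal_odds - 1

# AI gives fighter A a 0.58 confidence score at closing odds of 1.80:
print(expected_value(0.58, 1.80))   # +0.044 -> qualifies for ai_picked_positive_ev_closing
# The opponent gets a 0.42 score at odds of 2.30:
print(expected_value(0.42, 2.30))   # -0.034 -> not +EV
```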
Interpreting The Strategies
None of this is to say that +EV isn't a helpful metric. Look at another fundamental strategy, betting only on +EV AI-picked fighters (ai_picked_positive_ev_closing): +24.2% ROI. That's about 10 points higher than just betting down the line, but it means there'll sometimes be events where you don't get to bet, which makes me sad.
So let's put this all together. Why are parlays so good with this machine learning algorithm? It's because the algorithm is so gosh darn good at picking winners. Parlaying the AI picks, whether +EV or not, is a highly profitable strategy. The best strategy is doing 2 to 3 leg parlays on the +EV AI-picked fighters. I'll probably go back to posting these parlays on the homepage.
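To see why pick accuracy matters so much for parlays: both the payout and the win probability multiply across the legs, so a small edge per pick compounds. A rough illustration with made-up confidence scores and odds:

```python
# Combined odds, combined win probability, and EV for a 3-leg parlay (illustrative numbers).
from math import prod

legs = [(0.62, 1.65), (0.58, 1.80), (0.55, 1.95)]   # (AI confidence, decimal odds) per leg

combined_odds = prod(odds for _, odds in legs)      # ~5.79x payout if all legs hit
combined_p = prod(p for p, _ in legs)               # ~0.20 chance all three hit
parlay_ev = combined_p * combined_odds - 1          # ~ +0.15 per unit staked
print(combined_odds, combined_p, parlay_ev)
```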
EDIT: Shoutout to @Heisenb3rg on X for bringing Sharpe Ratios to my attention! This is a crucial metric from finance that measures risk-adjusted returns. Essentially, it tells us how much return we're getting for the amount of risk (volatility) we're taking on with a particular betting strategy. A higher Sharpe Ratio is better.
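In betting terms, the simplest version is: take the per-event returns of a strategy and divide the average return by the standard deviation of those returns (treating the risk-free rate as zero). A sketch with a placeholder return series:

```python
# Sharpe Ratio of a betting strategy from per-event ROI (risk-free rate = 0).
import numpy as np

event_returns = np.array([0.12, -0.05, 0.30, -0.10, 0.08, 0.02])   # placeholder ROI per event

sharpe = event_returns.mean() / event_returns.std(ddof=1)
print(sharpe)   # higher = more return per unit of volatility
```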

Looking at the Sharpe Ratio analysis, we can see how different strategies compare not just on raw ROI, but also on how bumpy the ride is. For v6.1, the 'ai_picked_positive_ev_closing' strategy seems to offer a strong balance, providing good returns without excessive risk compared to some others. However, the 'ai_all_picks_closing' still performs well from a risk-adjusted perspective, offering broader betting opportunities.
So, what's the takeaway? While different model versions might favor underdogs or favorites differently, the data consistently points towards a simple, robust approach. Considering both the ROI and the Sharpe Ratio (risk-adjusted return), the strategy of betting on all AI picks using the odds available 7 days before the event (ai_all_picks_7day) emerges as the winner. It offers the best blend of profitability and lower volatility, making it the simplest and most effective core strategy based on the current analysis.
It's important to remember that model performance, especially regarding favorites versus underdogs, can shift between versions and timelines. Therefore, relying solely on a specific odds range (like betting underdogs only) might be less reliable long-term, despite being very profitable over the past year. Instead, I think focusing on fundamental strategies evaluated through metrics like ROI and Sharpe Ratio is key. For the current model, based on the patterns I've seen over the years, I think I'm going to stick with the core strategy ai_all_picks_7day, which provides the best risk-adjusted performance as indicated by the Sharpe Ratio. To get higher returns at higher risk, 2 to 3 leg parlays of your choice are a good option.