Alright, so yesterday I was messing around, trying to see if I could predict the Clemson vs. FAU game. Just for kicks, you know?

First thing I did was hit up some sports sites, trying to get a feel for what the “experts” were saying. Looked at a bunch of pre-game analyses, you know, the ones with all the stats and fancy charts. Honestly, it was a bit overwhelming.
Then, I figured, “Hey, why not try to build a simple model?” I pulled some data from ESPN – past game results, player stats, you name it. Just the basic stuff. Cleaned it up a bit in Excel, you know, removing the weird characters and making sure the dates were right. That took longer than I thought it would.
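For what it's worth, the same kind of cleanup could have been done straight in pandas instead of Excel. Here's a rough sketch, assuming the export had the problems described (stray characters in team names, inconsistent date strings) — the column names and sample rows are made up for illustration:

```python
import pandas as pd

# Hypothetical raw export with the kinds of problems described:
# junk characters in team names, mixed date formats, numbers stored as text.
raw = pd.DataFrame({
    "team":   ["Clemson\xa0", " FAU*", "Clemson"],
    "date":   ["09/09/2023", "2023-09-02", "Sep 16, 2023"],
    "points": ["48", "14", "31"],
})

# Strip whitespace and non-letter junk from team names
raw["team"] = raw["team"].str.replace(r"[^A-Za-z ]", "", regex=True).str.strip()

# Parse each date individually so mixed formats all come out as timestamps
raw["date"] = raw["date"].apply(pd.to_datetime)

# Make sure numeric columns are actually numeric
raw["points"] = pd.to_numeric(raw["points"])
```

Honestly, for a one-off afternoon project Excel is fine, but doing it in code means you can re-run it when you pull fresh data.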
Next, I jumped into Python. Loaded up the data using Pandas. I was thinking, maybe I could just look at the average points scored by each team, their win percentage, and use that to guess the outcome. Super simple, nothing fancy.
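That "super simple" version boils down to one groupby. A minimal sketch — the column names and numbers here are placeholders, not the real ESPN data:

```python
import pandas as pd

# Hypothetical per-game results table: one row per game, per team.
games = pd.DataFrame({
    "team":   ["Clemson", "Clemson", "FAU", "FAU"],
    "points": [48, 31, 14, 21],
    "won":    [True, True, False, True],
})

# Average points scored and win percentage for each team
summary = games.groupby("team").agg(
    avg_points=("points", "mean"),
    win_pct=("won", "mean"),
)
```

With real data you'd have a full season of rows per team, but the aggregation is the same.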
I wrote a quick script to calculate those averages. Then, I tried to come up with some kind of weighting system. Like, maybe wins against tougher opponents should count for more. I played around with different weights, just tweaking them until the model seemed to be “predicting” past games reasonably well. It was all very subjective, I admit.
After that, I ran the model on the Clemson vs. FAU matchup. It spat out a “predicted” score. I’m not gonna lie, it was probably way off, but it was a fun exercise.
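The hand-tuned weighting idea can be sketched like this: each past game counts more if the opponent had a better record, and the "predicted" score is just each team's weighted average points. The weight formula and every number below are illustrative guesses, not the actual values I used:

```python
# Weight each game by opponent strength: weight = base + bonus * opponent win %.
# base_weight and strength_bonus are the knobs I'd have tweaked by hand.
def weighted_avg_points(games, base_weight=1.0, strength_bonus=1.0):
    weights = [base_weight + strength_bonus * g["opp_win_pct"] for g in games]
    total = sum(w * g["points"] for w, g in zip(weights, games))
    return total / sum(weights)

# Made-up past games: points scored, and the opponent's win percentage
clemson_games = [
    {"points": 48, "opp_win_pct": 0.25},
    {"points": 31, "opp_win_pct": 0.75},
]
fau_games = [
    {"points": 14, "opp_win_pct": 0.80},
    {"points": 21, "opp_win_pct": 0.40},
]

# "Predicted" score for the matchup: one weighted average per team
prediction = (weighted_avg_points(clemson_games),
              weighted_avg_points(fau_games))
```

The tweaking part is just re-running this with different `base_weight` / `strength_bonus` values until the outputs line up with past results — which is exactly as subjective as it sounds.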

Finally, I compared my “prediction” to the actual outcome of the game. Let’s just say my model needs some serious work. But hey, I learned a few things, and that’s what matters, right? Quick recap of what I did:
- Data Collection: Pulled past game results and player stats from ESPN.
- Data Cleaning: Used Excel to clean up the data.
- Model Building: Built a simple statistical model in Python.
- Prediction: Ran the model to predict the game outcome.
- Evaluation: Compared the prediction to the actual result.
Might try a more sophisticated approach next time, maybe using some machine learning. But for now, this was a good way to spend an afternoon.