gonna nerd out rq
you can actually use some fundamental concepts of machine learning in charting osu!mania beatmaps
i found out because i was trying to chart at 0.5x and 1.5x playback speed (for convenience and accuracy), but doing that made me prone to overfitting and underfitting respectively. charting at 0.5x confines you to a much smaller space (imagine charting a 4-minute song entirely at 0.5x: sure, it'd be consistent, but it would feel like charting an 8-minute song), so you won't realistically chart the whole song that way. by lowering the playback speed you can hear much more of the music, which also amplifies the random noise in it, so charting at this speed makes you prone to overfitting: you end up modelling the chart on random noise, which doesn't play well overall and is hard to extrapolate from (i.e. charting the sections after it at normal speed)
on the other hand, if we chart at 1.5x playback speed we are more prone to underfitting! at such high speeds we may miss notable sounds because everything effectively gets meshed together, and with our sense of difficulty broken, we chart the sped-up section with less density even though a higher density would have been fine at 1x playback speed
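to make the analogy concrete, here's a tiny sketch (just numpy, with a noisy sine wave standing in for the song and the polynomial degree standing in for how much fine detail you chase) — not how anyone actually charts, just the ML side of the picture:

```python
# toy illustration of underfitting vs overfitting (nothing osu!-specific):
# fit polynomials of different degrees to a noisy "signal"
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
signal = np.sin(np.pi * x)                  # the music we actually want to capture
y = signal + rng.normal(0, 0.3, x.size)     # plus the random noise you hear at 0.5x

grid = np.linspace(-1, 1, 200)              # points we didn't fit on = "the rest of the song"
for degree in (1, 4, 15):
    coeffs = np.polyfit(x, y, degree)       # higher degree = more flexibility
    err = np.mean((np.polyval(coeffs, grid) - np.sin(np.pi * grid)) ** 2)
    print(f"degree {degree:2d}: error on unseen points = {err:.3f}")

# degree 1 underfits (misses the melody), degree 15 chases the noise and
# generalises badly to points it wasn't fit on; something in between does best
```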
this is effectively the bias-variance tradeoff in osu!mania :D we need to find the sweet spot that keeps the total error down by balancing bias (underfitting) against variance (overfitting).
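for the curious, this tradeoff is an actual formula: for noisy data y = f(x) + ε and a fitted model f̂, the expected squared error decomposes as

```latex
\mathbb{E}\!\left[\big(y - \hat{f}(x)\big)^2\right]
  = \underbrace{\mathrm{Bias}\big[\hat{f}(x)\big]^2}_{\text{underfitting}}
  + \underbrace{\mathrm{Var}\big[\hat{f}(x)\big]}_{\text{overfitting}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```

pushing one term down tends to push the other up, which is exactly the 0.5x vs 1.5x problem.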
this is why we should have charted some parts at 1x speed, so we can use the other playback speeds only when needed and at the right times; that's essentially validation testing and tweaking the hyperparameters. in conclusion, although playback speed can give you a new perspective on the song, we need ground truth (validation & test sets) to make a genuinely good map. if it extrapolates well, then we know it's a good model!
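and here's that idea as a quick sketch (same made-up toy data as before): hold out a validation set as ground truth to tune the "hyperparameter" (here, polynomial degree), then check the choice on an untouched test set:

```python
# minimal sketch of validation & test sets, same toy setup as before
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 90)
y = np.sin(np.pi * x) + rng.normal(0, 0.3, x.size)

# split: 60% train, 20% validation (for tuning), 20% test (final check)
idx = rng.permutation(x.size)
train, val, test = idx[:54], idx[54:72], idx[72:]

def error(degree, fit_idx, eval_idx):
    """fit on one subset, measure squared error on another"""
    coeffs = np.polyfit(x[fit_idx], y[fit_idx], degree)
    pred = np.polyval(coeffs, x[eval_idx])
    return np.mean((pred - y[eval_idx]) ** 2)

# tune the hyperparameter (degree) against the validation set...
best_degree = min(range(1, 16), key=lambda d: error(d, train, val))
# ...then confirm the chosen model holds up on data it never saw
print("best degree on validation:", best_degree)
print("error on the untouched test set:", error(best_degree, train, test))
```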